- 25 Sep, 2020 3 commits
-
-
Christoph Hellwig authored
Move the comment documenting dma_addr_t away from the dma_map_ops definition which isn't very related to it, and toward DMA_MAPPING_ERROR, which is somewhat related. Add a little blurb about DMA_MAPPING_ERROR as well. Signed-off-by: Christoph Hellwig <hch@lst.de>
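A short, hedged illustration of what this means for driver code (example_map() is a made-up helper used only here; dma_map_single()/dma_mapping_error() are the stock DMA API):

#include <linux/dma-mapping.h>

/* example_map() is an illustrative helper, not part of this series */
static int example_map(struct device *dev, void *buf, size_t len)
{
	dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* a dma_addr_t is opaque: only dma_mapping_error() may test it */
	if (dma_mapping_error(dev, addr))
		return -ENOMEM;

	/* ... hand addr to the hardware, then tear the mapping down ... */
	dma_unmap_single(dev, addr, len, DMA_TO_DEVICE);
	return 0;
}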
-
Christoph Hellwig authored
Move the valid_dma_direction helper to a more suitable header, and clean it up to use the proper enum as well as removing pointless braces. Signed-off-by: Christoph Hellwig <hch@lst.de>
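The cleaned-up helper is roughly of the following shape (a sketch based on the description above, not a verbatim quote of the patch):

#include <linux/types.h>
#include <linux/dma-direction.h>

static inline bool valid_dma_direction(enum dma_data_direction dir)
{
	return dir == DMA_BIDIRECTIONAL || dir == DMA_TO_DEVICE ||
	       dir == DMA_FROM_DEVICE;
}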
-
Christoph Hellwig authored
This value is only used by a PCMCIA driver and not very useful. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Dominik Brodowski <linux@dominikbrodowski.net>
-
- 21 Sep, 2020 2 commits
-
-
Robin Murphy authored
Checking for a nonzero dma_pfn_offset was a quick shortcut to validate whether the DMA == phys assumption could hold at all. Checking for a non-NULL dma_range_map is not quite equivalent, since a map may be present to describe a limited DMA window even without an offset, and thus this check can now yield false positives. However, it only ever served to short-circuit going all the way through to __arm_lpae_alloc_pages(), failing the canonical test there, and having a bit more to clean up. As such, we can simply remove it without loss of correctness. Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
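A hedged sketch of the canonical test that remains in __arm_lpae_alloc_pages(); the allocation is simplified and the function name is illustrative, but it shows why the removed shortcut was redundant:

#include <linux/dma-mapping.h>
#include <linux/io.h>
#include <linux/slab.h>

/* illustrative, simplified version of the page table allocation path */
static void *lpae_alloc_table(struct device *dev, size_t size, gfp_t gfp)
{
	void *pages = kzalloc(size, gfp);	/* simplified allocation */
	dma_addr_t dma;

	if (!pages)
		return NULL;

	dma = dma_map_single(dev, pages, size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma))
		goto out_free;

	/* the io-pgtable code requires DMA address == physical address */
	if (dma != virt_to_phys(pages)) {
		dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
		goto out_free;
	}
	return pages;

out_free:
	kfree(pages);
	return NULL;
}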
-
Xu Wang authored
Replace a comma between expression statements by a semicolon. Signed-off-by: Xu Wang <vulab@iscas.ac.cn> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 17 Sep, 2020 7 commits
-
-
Jim Quinlan authored
The new field 'dma_range_map' in struct device is used to facilitate the use of single or multiple offsets between mapping regions of CPU addresses and DMA addresses. It subsumes the role of "dev->dma_pfn_offset", which was only capable of holding a single uniform offset and had no region bounds checking. The function of_dma_get_range() has been modified so that it takes a single argument -- the device node -- and returns a map, NULL, or an error code. The map is an array that holds the information regarding the DMA regions: each range entry contains the address offset, the cpu_start address, the dma_start address, and the size of the region. of_dma_configure() is the typical way to set range offsets, but there are a number of ad hoc assignments to "dev->dma_pfn_offset" in the kernel driver code. These cases now invoke the function dma_direct_set_offset(dev, cpu_addr, dma_addr, size). Signed-off-by: Jim Quinlan <james.quinlan@broadcom.com> [hch: various interface cleanups] Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org> Tested-by: Mathieu Poirier <mathieu.poirier@linaro.org> Tested-by: Nathan Chancellor <natechancellor@gmail.com>
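A hedged sketch of the shape of the new map and its lookup; the field names follow the commit description, but details may differ from the final mainline layout, and translate_phys_to_dma() is shown here only for illustration:

/* mirrors the description above; not a verbatim copy of the mainline struct */
struct bus_dma_region {
	phys_addr_t	cpu_start;	/* start of the CPU physical region */
	dma_addr_t	dma_start;	/* corresponding DMA/bus start address */
	u64		size;		/* region size; a zero size terminates the map */
	u64		offset;		/* cpu_start - dma_start, cached for translation */
};

static dma_addr_t translate_phys_to_dma(struct device *dev, phys_addr_t paddr)
{
	const struct bus_dma_region *m;

	for (m = dev->dma_range_map; m->size; m++)
		if (paddr >= m->cpu_start && paddr - m->cpu_start < m->size)
			return (dma_addr_t)paddr - m->offset;
	return DMA_MAPPING_ERROR;	/* address outside all described regions */
}

Per the commit description, drivers that previously assigned dev->dma_pfn_offset directly now call dma_direct_set_offset(dev, cpu_addr, dma_addr, size), which builds a single-entry map of this shape.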
-
Christoph Hellwig authored
As the comment in usb_alloc_dev correctly states, drivers can't use the DMA API on USB devices, and at least calling dma_set_mask on them is highly dangerous. Contrary to what the comment suggests, upper-level drivers also can't really use the presence of a dma mask to check for DMA support, as the dma_mask is set by default for most busses. Setting the dma_mask comes from "[PATCH] usbcore dma updates (and doc)" in BitKeeper times; it seems like it was primarily for setting the NETIF_F_HIGHDMA flag in USB drivers, something that has long since been fixed up. Setting the dma_pfn_offset comes from commit b44bbc46 ("usb: core: setup dma_pfn_offset for USB devices and, interfaces"), which worked around the fact that the scsi_calculate_bounce_limits function wasn't going through the proper driver interface to query DMA information, but that function was removed in commit 21e07dba ("scsi: reduce use of block bounce buffers") years ago. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
Christoph Hellwig authored
The DMA offset notifier can only be used if PHYS_OFFSET is at least KEYSTONE_HIGH_PHYS_START, which can't be represented by a 32-bit phys_addr_t. Currently the code compiles fine despite that, but a pending change to the DMA offset handling would create a compiler warning for this case. Add an ifdef to not compile the code except for LPAE configs. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
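A hedged sketch of the shape of the change; the notifier callback name and body are illustrative, only the CONFIG_ARM_LPAE guard is the point:

#include <linux/notifier.h>

#ifdef CONFIG_ARM_LPAE	/* only LPAE has a 64-bit phys_addr_t that can hold KEYSTONE_HIGH_PHYS_START */
static int keystone_platform_notifier(struct notifier_block *nb,
				      unsigned long event, void *data)
{
	/* ... apply the Keystone high-physical DMA offset to the device ... */
	return NOTIFY_OK;
}
#endif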
-
Christoph Hellwig authored
Move the helpers to translate to and from direct mapping DMA addresses to dma-direct.h. This not only is the most logical place, but the new placement also avoids dependency loops with pending commits. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
dma_to_virt is entirely unused, remove it. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Christoph Hellwig authored
The __arch_page_to_dma hook is long gone. Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Thomas Tai authored
When booting the kernel v5.9-rc4 on a VM, the kernel would panic when printing a warning message in swiotlb_map(). The dev->dma_mask must not be a NULL pointer when calling the dma mapping layer. A NULL pointer check can potentially avoid the panic. Signed-off-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
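A hedged sketch of the guard; dma_map_page_checked() is an illustrative wrapper name, not the mainline entry point, but the check mirrors what the patch description calls for:

#include <linux/dma-mapping.h>

static dma_addr_t dma_map_page_checked(struct device *dev, struct page *page,
				       size_t offset, size_t size,
				       enum dma_data_direction dir)
{
	/* refuse to map for a device that never called dma_set_mask() */
	if (WARN_ON_ONCE(!dev->dma_mask))
		return DMA_MAPPING_ERROR;

	return dma_map_page(dev, page, offset, size, dir);
}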
-
- 11 Sep, 2020 14 commits
-
-
Christoph Hellwig authored
dma_declare_coherent_memory should not be in a DMA API guide aimed at driver writers (that is, consumers of the API). Move it to a comment near the function instead. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Add a new file that contains helpers for misc DMA ops, which is only built when CONFIG_DMA_OPS is set. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
The __phys_to_dma vs phys_to_dma distinction isn't exactly obvious. Try to improve the situation by renaming __phys_to_dma to phys_to_dma_unencrypted, and by not forcing architectures that want to override phys_to_dma to actually provide __phys_to_dma. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
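A hedged sketch of the relationship after the rename, close in spirit to (but not a verbatim copy of) the resulting helpers; translate_phys_to_dma() stands in for the range-map lookup sketched earlier:

/* sketch; mirrors the helpers in <linux/dma-direct.h> after this change */
static inline dma_addr_t phys_to_dma_unencrypted(struct device *dev,
						 phys_addr_t paddr)
{
	if (dev->dma_range_map)
		return translate_phys_to_dma(dev, paddr);	/* range-map lookup */
	return paddr;
}

/* the regular helper additionally sets the SME encryption bit */
static inline dma_addr_t phys_to_dma(struct device *dev, phys_addr_t paddr)
{
	return __sme_set(phys_to_dma_unencrypted(dev, paddr));
}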
-
Christoph Hellwig authored
There is no harm in just always clearing the SME encryption bit, while significantly simplifying the interface. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Replace the current open-coded copy. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Move the detailed gfp_t setup from __dma_direct_alloc_pages into the caller to clean things up a little. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Just merge these helpers into the main dma_direct_{alloc,free} routines, as the additional checks are always false for the two callers. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
Add back a hook to optimize dcache flushing after reading executable code using DMA. This gets ia64 out of the business of pretending to be dma incoherent just for this optimization. Signed-off-by: Christoph Hellwig <hch@lst.de>
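A hedged sketch of how such an opt-in arch hook is typically wired up so that common DMA code can call it unconditionally; the exact Kconfig symbol name is an assumption here:

/* sketch of the hook declaration; ARCH_HAS_DMA_MARK_CLEAN is assumed */
#ifdef CONFIG_ARCH_HAS_DMA_MARK_CLEAN
void arch_dma_mark_clean(phys_addr_t paddr, size_t size);
#else
static inline void arch_dma_mark_clean(phys_addr_t paddr, size_t size)
{
	/* no-op on architectures that don't need the dcache optimization */
}
#endif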
-
Christoph Hellwig authored
Drivers that select DMA_OPS need to depend on HAS_DMA support to work. The vop driver was missing that dependency, so add it, and also add another depends in DMA_OPS itself. That won't fix the issue due to how Kconfig dependencies work, but it at least produces a warning about unmet dependencies. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
The jazzdma ops implement support for a very basic IOMMU. Thus we really should not use the dma-direct code that takes physical address limits into account. This survived through the great MIPS DMA ops cleanup mostly because I was lazy, but now it is time to fully split the implementations. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
-
Christoph Hellwig authored
Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
-
Christoph Hellwig authored
When transferring DMA ownership back to the CPU there should never be any writeback from the cache, as the buffer was owned by the device until now. Instead it should just be invalidated for the mapping directions where the device could have written data. Note that the changes rely on the fact that kmap_atomic is stubbed out for the !HIGHMEM case to simplify the code a bit. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
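A hedged sketch of the resulting direction handling; dma_sync_phys_for_cpu() and invalidate_dcache_range() are illustrative stand-ins, not the actual MIPS symbols:

/* illustrative sketch; function names are stand-ins for the MIPS helpers */
static void dma_sync_phys_for_cpu(phys_addr_t paddr, size_t size,
				  enum dma_data_direction dir)
{
	if (dir == DMA_TO_DEVICE)
		return;	/* device only read the buffer: nothing to do */

	/*
	 * DMA_FROM_DEVICE / DMA_BIDIRECTIONAL: the device may have written,
	 * so invalidate the stale CPU cache lines -- but never write back,
	 * as the CPU did not own the buffer until now.
	 */
	invalidate_dcache_range(paddr, paddr + size);
}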
-
Christoph Hellwig authored
Now that the main dma mapping entry points are out of line most of the symbols in dma-debug.c can only be called from built-in code. Remove the unused exports. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
Christoph Hellwig authored
dma_dummy_ops is only used by the ACPI code, which can't be modular. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Robin Murphy <robin.murphy@arm.com>
-
- 03 Sep, 2020 2 commits
-
-
Nicolin Chen authored
The default segment_boundary_mask was set to DMA_BIT_MASK(32) a decade ago by referencing the SCSI/block subsystem, as a 32-bit mask was good enough for most of the devices. Now more and more drivers set dma_masks above DMA_BIT_MASK(32) while only a handful of them call dma_set_seg_boundary(). This means that most drivers have a 4GB segmentation boundary because the DMA API returns a 32-bit default value, though they might not really have such a limit. The default segment_boundary_mask should mean "no limit" since the device doesn't explicitly set the mask. But a 32-bit mask certainly limits those devices capable of 32+ bit addressing. So this patch sets the default segment_boundary_mask to ULONG_MAX. Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Acked-by: Niklas Schnelle <schnelle@linux.ibm.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
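A hedged sketch of the accessor after this change; it should match the intent described above even if the mainline wording differs in detail:

/* sketch of dma_get_seg_boundary() with the new default */
static inline unsigned long dma_get_seg_boundary(struct device *dev)
{
	if (dev->dma_parms && dev->dma_parms->segment_boundary_mask)
		return dev->dma_parms->segment_boundary_mask;
	return ULONG_MAX;	/* "no limit" unless dma_set_seg_boundary() was called */
}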
-
Nicolin Chen authored
We found that callers of dma_get_seg_boundary mostly do an ALIGN with page mask and then do a page shift to get number of pages: ALIGN(boundary + 1, 1 << shift) >> shift However, the boundary might be as large as ULONG_MAX, which means that a device has no specific boundary limit. So either "+ 1" or passing it to ALIGN() would potentially overflow. According to kernel defines: #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask)) #define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1) We can simplify the logic here into a helper function doing: ALIGN(boundary + 1, 1 << shift) >> shift = ALIGN_MASK(b + 1, (1 << s) - 1) >> s = {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s = [b + 1 + (1 << s) - 1] >> s = [b + (1 << s)] >> s = (b >> s) + 1 This patch introduces and applies dma_get_seg_boundary_nr_pages() as an overflow-free helper for the dma_get_seg_boundary() callers to get numbers of pages. It also takes care of the NULL dev case for non-DMA API callers. Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com> Acked-by: Niklas Schnelle <schnelle@linux.ibm.com> Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc) Signed-off-by: Christoph Hellwig <hch@lst.de>
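A hedged sketch of the resulting helper, applying the simplification derived above; the !dev fallback keeps a 32-bit default for non-DMA API callers:

/* sketch of dma_get_seg_boundary_nr_pages(), following the derivation above */
static inline unsigned long
dma_get_seg_boundary_nr_pages(struct device *dev, unsigned int page_shift)
{
	if (!dev)
		return (U32_MAX >> page_shift) + 1;	/* non-DMA API callers */
	return (dma_get_seg_boundary(dev) >> page_shift) + 1;
}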
-
- 01 Sep, 2020 3 commits
-
-
Barry Song authored
CMA_MAX_NAME should be visible to CMA's users as they might need it to set the name of CMA areas and avoid hardcoding the size locally. So this patch moves CMA_MAX_NAME from the local header file to an include/linux header file and removes the hardcoded size in both hugetlb.c and contiguous.c. Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Barry Song authored
Right now, the SMMU is using dma_alloc_coherent() to get memory for its queues and tables. Typically, on an ARM64 server, there is a default CMA located at node0, which could be far away from node2, node3 etc. With this patch, the SMMU will get memory from the local NUMA node to save command queues and page tables. That means dma_unmap latency will shrink considerably. Meanwhile, when iommu.passthrough is on, device drivers which call dma_alloc_coherent() will also get local memory and avoid the travel between NUMA nodes. Acked-by: Will Deacon <will@kernel.org> Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
Barry Song authored
Right now, drivers like the ARM SMMU are using dma_alloc_coherent() to get coherent DMA buffers to save their command queues and page tables. As there is only one default CMA in the whole system, SMMUs on nodes other than node0 will get remote memory. This leads to significant latency. This patch provides per-NUMA CMA so that drivers like the SMMU can get local memory. Tests show localizing CMA can decrease dma_unmap latency considerably. For instance, before this patch, the SMMU on node2 had to wait more than 560ns for the completion of CMD_SYNC in an empty command queue; with this patch, it needs only 240ns. A positive side effect of this patch is improving performance even further for those users who are worried about performance more than DMA security and use iommu.passthrough=1 to skip the IOMMU. With local CMA, all drivers can get local coherent DMA buffers. Also, this patch changes the default CONFIG_CMA_AREAS to 19 on NUMA, as 1+CONFIG_CMA_AREAS should be quite enough for most servers on the market even if they enable both hugetlb_cma and pernuma_cma. 2 numa nodes: 2(hugetlb) + 2(pernuma) + 1(default global cma) = 5 4 numa nodes: 4(hugetlb) + 4(pernuma) + 1(default global cma) = 9 8 numa nodes: 8(hugetlb) + 8(pernuma) + 1(default global cma) = 17 Signed-off-by: Barry Song <song.bao.hua@hisilicon.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- 30 Aug, 2020 9 commits
-
-
Linus Torvalds authored
-
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Linus Torvalds authored
Pull crypto fixes from Herbert Xu: - fix regression in af_alg that affects iwd - restore polling delay in qat - fix double free in ingenic on error path - fix potential build failure in sa2ul due to missing Kconfig dependency * 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: crypto: af_alg - Work around empty control messages without MSG_MORE crypto: sa2ul - add Kconfig selects to fix build error crypto: ingenic - Drop kfree for memory allocated with devm_kzalloc crypto: qat - add delay before polling mailbox
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull x86 fixes from Thomas Gleixner: "Three interrupt related fixes for X86: - Move disabling of the local APIC after invoking fixup_irqs() to ensure that interrupts which are incoming are noted in the IRR and not ignored. - Unbreak affinity setting. The rework of the entry code reused the regular exception entry code for device interrupts. The vector number is pushed into the errorcode slot on the stack which is then lifted into an argument and set to -1 because that's regs->orig_ax which is used in quite some places to check whether the entry came from a syscall. But it was overlooked that orig_ax is used in the affinity cleanup code to validate whether the interrupt has arrived on the new target. It turned out that this vector check is pointless because interrupts are never moved from one vector to another on the same CPU. That check is a historical leftover from the time when x86 supported multi-CPU affinities, but it is no longer needed with the now strict single-CPU affinity. Famous last words ... - Add a missing check for an empty cpumask into the matrix allocator. The affinity change added a warning to catch the case where an interrupt is moved on the same CPU to a different vector. This triggers because a condition with an empty cpumask returns an assignment from the allocator, as the allocator uses for_each_cpu() without checking the cpumask for being empty. The historical inconsistent for_each_cpu() behaviour of ignoring the cpumask and unconditionally claiming that CPU0 is in the mask struck again. Sigh. Plus a new entry into the MAINTAINERS file for the HPE/UV platform" * tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: genirq/matrix: Deal with the sillyness of for_each_cpu() on UP x86/irq: Unbreak interrupt affinity setting x86/hotplug: Silence APIC only after all interrupts are migrated MAINTAINERS: Add entry for HPE Superdome Flex (UV) maintainers
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull irq fixes from Thomas Gleixner: "A set of fixes for interrupt chip drivers: - Revert the platform driver conversion of interrupt chip drivers as it turned out to create more problems than it solves. - Fix a trivial typo in the new module helpers which made probing reliably fail. - Small fixes in the STM32 and MIPS Ingenic drivers - The TI firmware rework which had badly managed dependencies and had to wait post rc1" * tag 'irq-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: irqchip/ingenic: Leave parent IRQ unmasked on suspend irqchip/stm32-exti: Avoid losing interrupts due to clearing pending bits by mistake irqchip: Revert modular support for drivers using IRQCHIP_PLATFORM_DRIVER helperse irqchip: Fix probing deferal when using IRQCHIP_PLATFORM_DRIVER helpers arm64: dts: k3-am65: Update the RM resource types arm64: dts: k3-am65: ti-sci-inta/intr: Update to latest bindings arm64: dts: k3-j721e: ti-sci-inta/intr: Update to latest bindings irqchip/ti-sci-inta: Add support for INTA directly connecting to GIC irqchip/ti-sci-inta: Do not store TISCI device id in platform device id field dt-bindings: irqchip: Convert ti, sci-inta bindings to yaml dt-bindings: irqchip: ti, sci-inta: Update docs to support different parent. irqchip/ti-sci-intr: Add support for INTR being a parent to INTR dt-bindings: irqchip: Convert ti, sci-intr bindings to yaml dt-bindings: irqchip: ti, sci-intr: Update bindings to drop the usage of gic as parent firmware: ti_sci: Add support for getting resource with subtype firmware: ti_sci: Drop unused structure ti_sci_rm_type_map firmware: ti_sci: Drop the device id to resource type translation
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull scheduler fix from Thomas Gleixner: "A single fix for the scheduler: - Make is_idle_task() __always_inline to prevent the compiler from putting it out of line into the wrong section because it's used inside noinstr sections" * tag 'sched-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: sched: Use __always_inline on is_idle_task()
-
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Linus Torvalds authored
Pull locking fixes from Thomas Gleixner: "A set of fixes for lockdep, tracing and RCU: - Prevent recursion by using raw_cpu_* operations - Fixup the interrupt state in the cpu idle code to be consistent - Push rcu_idle_enter/exit() invocations deeper into the idle path so that the lock operations are inside the RCU watching sections - Move trace_cpu_idle() into generic code so it's called before RCU goes idle. - Handle raw_local_irq* vs. local_irq* operations correctly - Move the tracepoints out from under the lockdep recursion handling which turned out to be fragile and inconsistent" * tag 'locking-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: lockdep,trace: Expose tracepoints lockdep: Only trace IRQ edges mips: Implement arch_irqs_disabled() arm64: Implement arch_irqs_disabled() nds32: Implement arch_irqs_disabled() locking/lockdep: Cleanup x86/entry: Remove unused THUNKs cpuidle: Move trace_cpu_idle() into generic code cpuidle: Make CPUIDLE_FLAG_TLB_FLUSHED generic sched,idle,rcu: Push rcu_idle deeper into the idle path cpuidle: Fixup IRQ state lockdep: Use raw_cpu_*() for per-cpu variables
-
git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds authored
Pull cifs fix from Steve French: "DFS fix for referral problem when using SMB1" * tag '5.9-rc2-smb-fix' of git://git.samba.org/sfrench/cifs-2.6: cifs: fix check of tcon dfs in smb1
-
git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Linus Torvalds authored
Pull powerpc fixes from Michael Ellerman: - Revert our removal of PROT_SAO, at least one user expressed an interest in using it on Power9. Instead don't allow it to be used in guests unless enabled explicitly at compile time. - A fix for a crash introduced by a recent change to FP handling. - Revert a change to our idle code that left Power10 with no idle support. - One minor fix for the new scv system call path to set PPR. - Fix a crash in our "generic" PMU if branch stack events were enabled. - A fix for the IMC PMU, to correctly identify host kernel samples. - The ADB_PMU powermac code was found to be incompatible with VMAP_STACK, so make them incompatible in Kconfig until the code can be fixed. - A build fix in drivers/video/fbdev/controlfb.c, and a documentation fix. Thanks to Alexey Kardashevskiy, Athira Rajeev, Christophe Leroy, Giuseppe Sacco, Madhavan Srinivasan, Milton Miller, Nicholas Piggin, Pratik Rajesh Sampat, Randy Dunlap, Shawn Anastasio, Vaidyanathan Srinivasan. * tag 'powerpc-5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux: powerpc/32s: Disable VMAP stack which CONFIG_ADB_PMU Revert "powerpc/powernv/idle: Replace CPU feature check with PVR check" powerpc/perf: Fix reading of MSR[HV/PR] bits in trace-imc powerpc/perf: Fix crashes with generic_compat_pmu & BHRB powerpc/64s: Fix crash in load_fp_state() due to fpexc_mode powerpc/64s: scv entry should set PPR Documentation/powerpc: fix malformed table in syscall64-abi video: fbdev: controlfb: Fix build for COMPILE_TEST=y && PPC_PMAC=n selftests/powerpc: Update PROT_SAO test to skip ISA 3.1 powerpc/64s: Disallow PROT_SAO in LPARs by default Revert "powerpc/64s: Remove PROT_SAO support"
-
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Linus Torvalds authored
Pull USB fixes from Greg KH: "Let's try this again... Here are some USB fixes for 5.9-rc3. This differs from the previous pull request for this release in that the usb gadget patch now does not break some systems, and actually does what it was intended to do. Many thanks to Marek Szyprowski for quickly noticing and testing the patch from Andy Shevchenko to resolve this issue. Additionally, some more new USB quirks have been added to get some new devices to work properly based on user reports. Other than that, the patches are all here, and they contain: - usb gadget driver fixes - xhci driver fixes - typec fixes - new quirks and ids - fixes for USB patches that went into 5.9-rc1. All of these have been tested in linux-next with no reported issues" * tag 'usb-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (33 commits) usb: storage: Add unusual_uas entry for Sony PSZ drives USB: Ignore UAS for JMicron JMS567 ATA/ATAPI Bridge usb: host: ohci-exynos: Fix error handling in exynos_ohci_probe() USB: gadget: u_f: Unbreak offset calculation in VLAs USB: quirks: Ignore duplicate endpoint on Sound Devices MixPre-D usb: typec: tcpm: Fix Fix source hard reset response for TDA 2.3.1.1 and TDA 2.3.1.2 failures USB: PHY: JZ4770: Fix static checker warning. USB: gadget: f_ncm: add bounds checks to ncm_unwrap_ntb() USB: gadget: u_f: add overflow checks to VLA macros xhci: Always restore EP_SOFT_CLEAR_TOGGLE even if ep reset failed xhci: Do warm-reset when both CAS and XDEV_RESUME are set usb: host: xhci: fix ep context print mismatch in debugfs usb: uas: Add quirk for PNY Pro Elite tools: usb: move to tools buildsystem USB: Fix device driver race USB: Also match device drivers using the ->match vfunc usb: host: xhci-tegra: fix tegra_xusb_get_phy() usb: host: xhci-tegra: otg usb2/usb3 port init usb: hcd: Fix use after free in usb_hcd_pci_remove() usb: typec: ucsi: Hold con->lock for the entire duration of ucsi_register_port() ...
-