- 21 Aug, 2021 22 commits
-
-
Marco Chiappero authored
Currently all the functions related to the activation of the PFVF protocol, both on PF and VF, include the direction specific "vf2pf" name. Replace the existing naming scheme with: - a direction agnostic name, applying to both PF and VF, for the function pointer ("pfvf") - a direction specific name for the implementations ("pf2vf" or "vf2pf") In particular this patch renames: - adf_pf_enable_vf2pf_comms() to adf_enable_pf2vf_comms() - enable_vf2pf_comms() to enable_pfvf_comms() Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Marco Chiappero authored
Make sure all the steps in the initialization sequence are complete before any completion event notification. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
Move the IOV functions to the end of hw_data so that the PFVF related functions are grouped together. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Marco Chiappero authored
At start and shutdown, VFs notify the PF about their state. These notifications are carried out through a message exchange using the PFVF protocol. The function names lead one to believe that they perform init or shutdown logic. Fix the naming to better reflect their purpose. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Kanchana Velusamy authored
In the PF interrupt handler, the interrupt is disabled for a set of VFs by writing to the interrupt source mask register, ERRMSK. The interrupt is re-enabled in the bottom half handler by writing to the same CSR. This is done through the functions enable_vf2pf_interrupts() and disable_vf2pf_interrupts(), which perform a read-modify-write operation on the ERRMSK registers to mask and unmask the interrupt source. There can be a race condition where the top half handler for one VF interrupt runs just as the bottom half for another VF is about to re-enable the interrupt. Depending on whether the top or bottom half updates the CSR first, this would result either in a spurious interrupt or in the interrupt not being re-enabled. This patch protects access to ERRMSK with a spinlock. The functions adf_enable_vf2pf_interrupts() and adf_disable_vf2pf_interrupts() have been changed to acquire a spinlock before accessing and modifying the ERRMSK registers. These functions use spin_lock_irqsave() to disable IRQs and avoid potential deadlocks. In addition, the function adf_disable_vf2pf_interrupts_irq() has been added. This uses spin_lock() and is meant to be used in the top half only. Signed-off-by: Kanchana Velusamy <kanchanax.velusamy@intel.com> Co-developed-by: Marco Chiappero <marco.chiappero@intel.com> Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
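A minimal sketch of the locking pattern described above, assuming a generic memory-mapped mask register; the lock name, register helpers and offsets are illustrative and not the actual QAT driver code:

#include <linux/io.h>
#include <linux/spinlock.h>
#include <linux/types.h>

static DEFINE_SPINLOCK(errmsk_lock);	/* serialises all ERRMSK-style accesses */

/* Bottom half / process context: may be interrupted, so disable IRQs too. */
static void mask_vf2pf_ints(void __iomem *errmsk, u32 vf_mask)
{
	unsigned long flags;

	spin_lock_irqsave(&errmsk_lock, flags);
	writel(readl(errmsk) | vf_mask, errmsk);	/* read-modify-write under the lock */
	spin_unlock_irqrestore(&errmsk_lock, flags);
}

/* Top half only: interrupts are already disabled, a plain spin_lock() is enough. */
static void mask_vf2pf_ints_irq(void __iomem *errmsk, u32 vf_mask)
{
	spin_lock(&errmsk_lock);
	writel(readl(errmsk) | vf_mask, errmsk);
	spin_unlock(&errmsk_lock);
}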
-
Marco Chiappero authored
Interrupt code to enable interrupts from PF does not belong to the protocol code, so move it to the interrupt handling specific file for better code organization. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Marco Chiappero authored
Use reinit_completion() to set to a clean state a completion variable, used to coordinate the VF to PF request-response flow, before every new VF request. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
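A rough sketch of the request-response pattern described, under the assumption of a single outstanding request at a time; pfvf_ctx, the timeout value and the message-sending step are placeholders rather than the driver's real code:

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

struct pfvf_ctx {
	struct completion msg_received;	/* init_completion() is done once at setup */
};

static int vf2pf_request_sketch(struct pfvf_ctx *ctx)
{
	/* Reset the completion to a clean state before every new request so a
	 * stale complete() from a previous exchange cannot be consumed here. */
	reinit_completion(&ctx->msg_received);

	/* ... write the request to the PF and ring the doorbell here ... */

	if (!wait_for_completion_timeout(&ctx->msg_received, msecs_to_jiffies(100)))
		return -ETIMEDOUT;	/* the PF never answered */

	return 0;
}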
-
Svyatoslav Pankratov authored
The PF driver uses the tasklet vf2pf_bh_tasklet to schedule a workqueue to handle the vf2pf protocol (pf2vf_resp_wq). Since the tasklet is only used to schedule the workqueue, this patch removes it and schedules the pf2vf_resp_wq workqueue directly from the top half. Signed-off-by: Svyatoslav Pankratov <svyatoslav.pankratov@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Marco Chiappero <marco.chiappero@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Marco Chiappero authored
Rename ADF_PFVF_COMPATIBILITY_VERSION in ADF_PFVF_COMPAT_THIS_VERSION since it is used to indicate the current version of the PFVF protocol. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Marco Chiappero authored
There is a chance that the PFVF handler, adf_vf2pf_req_hndl(), runs twice for the same request when multiple interrupts come simultaneously from different VFs. Since the source VF is identified by a positional bit set in the ERRSOU registers, and that bit is not cleared until the bottom half completes, new top halves from other VFs may reschedule a second bottom half for previous interrupts. This patch solves the problem in the ISR handler by ignoring sources whose interrupts are already disabled (i.e. with processing still pending), as indicated by the ERRMSK registers. Also, move some definitions to where they are actually needed. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
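A sketch of the masking logic described above; the register offsets and the surrounding structure are invented for illustration and do not match the real driver:

#include <linux/interrupt.h>
#include <linux/io.h>

#define ERRSOU_OFFSET	0x0	/* illustrative: VF2PF interrupt source CSR */
#define ERRMSK_OFFSET	0x4	/* illustrative: VF2PF interrupt mask CSR */

static irqreturn_t pf_isr_sketch(void __iomem *pmisc)
{
	u32 sources = readl(pmisc + ERRSOU_OFFSET);	/* VFs that raised an interrupt */
	u32 disabled = readl(pmisc + ERRMSK_OFFSET);	/* VFs already masked... */
	u32 pending = sources & ~disabled;		/* ...i.e. already being processed */

	if (!pending)
		return IRQ_NONE;	/* nothing new: do not schedule a second bottom half */

	/* mask only the new sources, then schedule the bottom half for them */
	writel(disabled | pending, pmisc + ERRMSK_OFFSET);
	return IRQ_HANDLED;
}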
-
Giovanni Cabiddu authored
QAT GEN2 devices suffer from a defect where the MSI interrupt can be sent multiple times. If the second (spurious) interrupt is handled before the bottom half handler runs, then the extra interrupt is effectively ignored because the bottom half is only scheduled once. However, if the top half runs again after the bottom half runs, this will appear as a spurious PF to VF interrupt. This can be avoided by checking the interrupt mask register in addition to the interrupt source register in the interrupt handler. This patch is based on earlier work done by Conor McLoughlin. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Co-developed-by: Marco Chiappero <marco.chiappero@intel.com> Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Giovanni Cabiddu authored
The top half of the VF drivers handled only one source at a time. If a PF2VF interrupt and a bundle interrupt occurred at the same time, the ISR scheduled only the bottom half for PF2VF. This patch fixes the VF top half so that if both interrupt sources trigger at the same time, both bottom halves are scheduled. This patch is based on earlier work done by Conor McLoughlin. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Marco Chiappero <marco.chiappero@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
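A sketch of a top half that checks both sources on every interrupt; the context structure, CSR offset and bit positions are illustrative placeholders:

#include <linux/bits.h>
#include <linux/interrupt.h>
#include <linux/io.h>

#define VINTSOU_OFFSET	0x204	/* illustrative interrupt source CSR offset */
#define VINTSOU_PF2VF	BIT(0)	/* PF-to-VF message pending */
#define VINTSOU_BUNDLE	BIT(1)	/* ring bundle responses pending */

struct vf_ctx {			/* hypothetical per-VF context */
	void __iomem *pmisc;
	struct tasklet_struct pf2vf_bh;
	struct tasklet_struct ring_bh;
};

static irqreturn_t vf_isr_sketch(int irq, void *privdata)
{
	struct vf_ctx *vf = privdata;
	u32 sou = readl(vf->pmisc + VINTSOU_OFFSET);
	bool handled = false;

	if (sou & VINTSOU_PF2VF) {
		tasklet_hi_schedule(&vf->pf2vf_bh);
		handled = true;
	}
	if (sou & VINTSOU_BUNDLE) {
		tasklet_hi_schedule(&vf->ring_bh);
		handled = true;
	}
	/* Both sources are tested on every interrupt, so neither can be lost
	 * when they fire at the same time. */
	return handled ? IRQ_HANDLED : IRQ_NONE;
}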
-
Giovanni Cabiddu authored
The function adf_dev_init() ignores the error code reported by enable_vf2pf_comms(). If the latter fails, e.g. the VF is not compatible with the PF, the load of the VF driver still progresses. This patch changes adf_dev_init() so that the error code from enable_vf2pf_comms() is returned to the caller. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Marco Chiappero <marco.chiappero@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Marco Chiappero authored
Enable device interrupts after the setup of the interrupt handlers. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Marco Chiappero authored
Remove the empty implementation of sriov_configure() and set the sriov_configure member of the pci_driver structure to NULL. This way, if a user tries to enable VFs on a device, when kernel and driver are built with CONFIG_PCI_IOV=n, the kernel reports an error message saying that the driver does not support SRIOV configuration via sysfs. Signed-off-by: Marco Chiappero <marco.chiappero@intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
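The idea in miniature, as a hedged sketch; the driver and callback names are invented for illustration:

#include <linux/kconfig.h>
#include <linux/pci.h>

static int example_sriov_configure(struct pci_dev *pdev, int numvfs)
{
	/* real VF enable/disable logic would live here */
	return numvfs;
}

static struct pci_driver example_qat_driver = {
	.name = "example_qat",
	/* Leave sriov_configure NULL when IOV support is compiled out: writes
	 * to sriov_numvfs in sysfs are then rejected by the PCI core with an
	 * error saying the driver does not support SRIOV configuration. */
	.sriov_configure = IS_ENABLED(CONFIG_PCI_IOV) ? example_sriov_configure : NULL,
};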
-
Giovanni Cabiddu authored
Replace the vf_mask type with unsigned long to avoid a stack out-of-bounds access. This is to fix the following warning reported by KASAN the first time adf_msix_isr_ae() gets called.

[ 692.091987] BUG: KASAN: stack-out-of-bounds in find_first_bit+0x28/0x50
[ 692.092017] Read of size 8 at addr ffff88afdf789e60 by task swapper/32/0
[ 692.092076] Call Trace:
[ 692.092089]  <IRQ>
[ 692.092101]  dump_stack+0x9c/0xcf
[ 692.092132]  print_address_description.constprop.0+0x18/0x130
[ 692.092164]  ? find_first_bit+0x28/0x50
[ 692.092185]  kasan_report.cold+0x7f/0x111
[ 692.092213]  ? static_obj+0x10/0x80
[ 692.092234]  ? find_first_bit+0x28/0x50
[ 692.092262]  find_first_bit+0x28/0x50
[ 692.092288]  adf_msix_isr_ae+0x16e/0x230 [intel_qat]

Fixes: ed8ccaef ("crypto: qat - Add support for SRIOV") Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Reviewed-by: Marco Chiappero <marco.chiappero@intel.com> Reviewed-by: Fiona Trahe <fiona.trahe@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
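For illustration, the type issue in a nutshell: bitmap helpers such as find_first_bit() and for_each_set_bit() dereference whole unsigned longs, so the mask they scan must live in an unsigned long rather than a narrower stack variable. A hedched sketch (handle_vf() is a hypothetical per-VF callback):

#include <linux/bitops.h>
#include <linux/bits.h>

static void handle_pending_vfs(u32 pending, void (*handle_vf)(unsigned int))
{
	/* Storing the mask in a u32 and casting its address would let
	 * find_first_bit() read 8 bytes from a 4-byte stack slot, which is
	 * exactly the KASAN splat above.  Use a full unsigned long instead. */
	unsigned long vf_mask = pending;
	unsigned int i;

	for_each_set_bit(i, &vf_mask, BITS_PER_LONG)
		handle_vf(i);
}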
-
Christophe JAILLET authored
s/Enable/Disable/ when describing 'adf_disable_aer()' Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christophe JAILLET authored
If an error occurs after an 'adf_enable_aer()' call, it must be undone by a corresponding 'adf_disable_aer()' call, as already done in the remove function. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
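A sketch of the unwind pattern the patch introduces; the probe body, the intermediate step and the simplified signatures are placeholders, not the driver's real prototypes:

/* Sketch only: signatures simplified for illustration. */
static int probe_sketch(struct adf_accel_dev *accel_dev)
{
	int ret;

	ret = adf_enable_aer(accel_dev);	/* AER reporting set up first */
	if (ret)
		return ret;

	ret = later_init_step(accel_dev);	/* hypothetical follow-up step */
	if (ret)
		goto err_disable_aer;

	return 0;

err_disable_aer:
	adf_disable_aer(accel_dev);		/* undo adf_enable_aer() on failure */
	return ret;
}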
-
Giovanni Cabiddu authored
Change the DMA mask from 64 to 48 bits for Gen2 devices as they cannot handle addresses greater than 48 bits. Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christophe JAILLET authored
The wrappers in include/linux/pci-dma-compat.h should go away. Replace 'pci_set_dma_mask/pci_set_consistent_dma_mask' by an equivalent and less verbose 'dma_set_mask_and_coherent()' call. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
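A minimal sketch of the replacement, reusing the 48-bit mask from the previous commit; the wrapper function is just for illustration:

#include <linux/dma-mapping.h>
#include <linux/pci.h>

static int set_dma_mask_sketch(struct pci_dev *pdev)
{
	/* Replaces the deprecated pair:
	 *   pci_set_dma_mask(pdev, DMA_BIT_MASK(48));
	 *   pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(48));
	 * with a single call that sets both the streaming and the coherent mask. */
	return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
}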
-
Ben Hutchings authored
lockdep complains that in omap-aes, the list_lock is taken both with softirqs enabled at probe time, and also in softirq context, which could lead to a deadlock:

================================
WARNING: inconsistent lock state
5.14.0-rc1-00035-gc836005b01c5-dirty #69 Not tainted
--------------------------------
inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
ksoftirqd/0/7 [HC0[0]:SC1[3]:HE1:SE0] takes:
bf00e014 (list_lock){+.?.}-{2:2}, at: omap_aes_find_dev+0x18/0x54 [omap_aes_driver]
{SOFTIRQ-ON-W} state was registered at:
  _raw_spin_lock+0x40/0x50
  omap_aes_probe+0x1d4/0x664 [omap_aes_driver]
  platform_probe+0x58/0xb8
  really_probe+0xbc/0x314
  __driver_probe_device+0x80/0xe4
  driver_probe_device+0x30/0xc8
  __driver_attach+0x70/0xf4
  bus_for_each_dev+0x70/0xb4
  bus_add_driver+0xf0/0x1d4
  driver_register+0x74/0x108
  do_one_initcall+0x84/0x2e4
  do_init_module+0x5c/0x240
  load_module+0x221c/0x2584
  sys_finit_module+0xb0/0xec
  ret_fast_syscall+0x0/0x2c
  0xbed90b30
irq event stamp: 111800
hardirqs last enabled at (111800): [<c02a21e4>] __kmalloc+0x484/0x5ec
hardirqs last disabled at (111799): [<c02a21f0>] __kmalloc+0x490/0x5ec
softirqs last enabled at (111776): [<c01015f0>] __do_softirq+0x2b8/0x4d0
softirqs last disabled at (111781): [<c0135948>] run_ksoftirqd+0x34/0x50
other info that might help us debug this:
 Possible unsafe locking scenario:
       CPU0
       ----
  lock(list_lock);
  <Interrupt>
    lock(list_lock);
 *** DEADLOCK ***
2 locks held by ksoftirqd/0/7:
 #0: c0f5e8c8 (rcu_read_lock){....}-{1:2}, at: netif_receive_skb+0x6c/0x260
 #1: c0f5e8c8 (rcu_read_lock){....}-{1:2}, at: ip_local_deliver_finish+0x2c/0xdc
stack backtrace:
CPU: 0 PID: 7 Comm: ksoftirqd/0 Not tainted 5.14.0-rc1-00035-gc836005b01c5-dirty #69
Hardware name: Generic AM43 (Flattened Device Tree)
[<c010e6e0>] (unwind_backtrace) from [<c010b9d0>] (show_stack+0x10/0x14)
[<c010b9d0>] (show_stack) from [<c017c640>] (mark_lock.part.17+0x5bc/0xd04)
[<c017c640>] (mark_lock.part.17) from [<c017d9e4>] (__lock_acquire+0x960/0x2fa4)
[<c017d9e4>] (__lock_acquire) from [<c0180980>] (lock_acquire+0x10c/0x358)
[<c0180980>] (lock_acquire) from [<c093d324>] (_raw_spin_lock_bh+0x44/0x58)
[<c093d324>] (_raw_spin_lock_bh) from [<bf00b258>] (omap_aes_find_dev+0x18/0x54 [omap_aes_driver])
[<bf00b258>] (omap_aes_find_dev [omap_aes_driver]) from [<bf00b328>] (omap_aes_crypt+0x94/0xd4 [omap_aes_driver])
[<bf00b328>] (omap_aes_crypt [omap_aes_driver]) from [<c08ac6d0>] (esp_input+0x1b0/0x2c8)
[<c08ac6d0>] (esp_input) from [<c08c9e90>] (xfrm_input+0x410/0x1290)
[<c08c9e90>] (xfrm_input) from [<c08b6374>] (xfrm4_esp_rcv+0x54/0x11c)
[<c08b6374>] (xfrm4_esp_rcv) from [<c0838840>] (ip_protocol_deliver_rcu+0x48/0x3bc)
[<c0838840>] (ip_protocol_deliver_rcu) from [<c0838c50>] (ip_local_deliver_finish+0x9c/0xdc)
[<c0838c50>] (ip_local_deliver_finish) from [<c0838dd8>] (ip_local_deliver+0x148/0x1b0)
[<c0838dd8>] (ip_local_deliver) from [<c0838f5c>] (ip_rcv+0x11c/0x180)
[<c0838f5c>] (ip_rcv) from [<c077e3a4>] (__netif_receive_skb_one_core+0x54/0x74)
[<c077e3a4>] (__netif_receive_skb_one_core) from [<c077e588>] (netif_receive_skb+0xa8/0x260)
[<c077e588>] (netif_receive_skb) from [<c068d6d4>] (cpsw_rx_handler+0x224/0x2fc)
[<c068d6d4>] (cpsw_rx_handler) from [<c0688ccc>] (__cpdma_chan_process+0xf4/0x188)
[<c0688ccc>] (__cpdma_chan_process) from [<c068a0c0>] (cpdma_chan_process+0x3c/0x5c)
[<c068a0c0>] (cpdma_chan_process) from [<c0690e14>] (cpsw_rx_mq_poll+0x44/0x98)
[<c0690e14>] (cpsw_rx_mq_poll) from [<c0780810>] (__napi_poll+0x28/0x268)
[<c0780810>] (__napi_poll) from [<c0780c64>] (net_rx_action+0xcc/0x204)
[<c0780c64>] (net_rx_action) from [<c0101478>] (__do_softirq+0x140/0x4d0)
[<c0101478>] (__do_softirq) from [<c0135948>] (run_ksoftirqd+0x34/0x50)
[<c0135948>] (run_ksoftirqd) from [<c01583b8>] (smpboot_thread_fn+0xf4/0x1d8)
[<c01583b8>] (smpboot_thread_fn) from [<c01546dc>] (kthread+0x14c/0x174)
[<c01546dc>] (kthread) from [<c010013c>] (ret_from_fork+0x14/0x38)
...

The omap-des and omap-sham drivers appear to have a similar issue. Fix this by using spin_{,un}lock_bh() around device list access in all the probe and remove functions. Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
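A minimal sketch of the locking change, assuming the usual driver-global device list; the helper names are illustrative:

#include <linux/list.h>
#include <linux/spinlock.h>

static LIST_HEAD(dev_list);
static DEFINE_SPINLOCK(list_lock);

/* Probe/remove run in process context with softirqs enabled, while the
 * request path takes list_lock from softirq context; a plain spin_lock()
 * in probe is therefore the inconsistent usage lockdep flags.  Using the
 * _bh variant for every process-context access removes the deadlock. */
static void add_dev_sketch(struct list_head *entry)
{
	spin_lock_bh(&list_lock);
	list_add_tail(entry, &dev_list);
	spin_unlock_bh(&list_lock);
}

static void del_dev_sketch(struct list_head *entry)
{
	spin_lock_bh(&list_lock);
	list_del(entry);
	spin_unlock_bh(&list_lock);
}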
-
Ben Hutchings authored
omap_crypto_cleanup() currently copies data from sg to orig if either copy flag is set. However OMAP_CRYPTO_SG_COPIED means that sg refers to the same pages as orig, truncated to len bytes. There is no need to copy in this case. Only copy data if the OMAP_CRYPTO_DATA_COPIED flag is set. Fixes: 74ed87e7 ("crypto: omap - add base support library for common ...") Signed-off-by: Ben Hutchings <ben.hutchings@mind.be> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 12 Aug, 2021 8 commits
-
-
Randy Dunlap authored
Don't use "/**" to begin a comment that is not kernel-doc notation. crypto/wp512.c:779: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst * The core Whirlpool transform. Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Weili Qian authored
Kunpeng930 hpre device supports dynamic clock gating. When doing tasks, the algorithm core is opened, and when idle, the algorithm core is closed. This patch enables hpre dynamic clock gating by writing hardware registers. Signed-off-by: Weili Qian <qianweili@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Weili Qian authored
Kunpeng930 sec device supports dynamic clock gating. When doing tasks, the algorithm core is opened, and when idle, the algorithm core is closed. This patch enables sec dynamic clock gating by writing hardware registers. Signed-off-by: Weili Qian <qianweili@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Weili Qian authored
Kunpeng930 zip device supports dynamic clock gating. When executing tasks, the algorithm core is opened, and when idle, the algorithm core is closed. This patch enables zip dynamic clock gating by writing hardware registers. Signed-off-by: Weili Qian <qianweili@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Hongbo Li authored
We should set the additional space to 0 in mpi_resize(), so use kcalloc() instead of kmalloc_array(). In lib/mpi/ec.c:

/****************
 * Resize the array of A to NLIMBS. the additional space is cleared
 * (set to 0) [done by m_realloc()]
 */
int mpi_resize(MPI a, unsigned nlimbs)

As the comment on the kernel's mpi_resize() says, the additional space needs to be set to 0, but when a->d is not NULL it is not. The kernel's mpi lib comes from libgcrypt, where the resize function _gcry_mpi_resize() does set the additional space to 0. This bug may cause MPI APIs that use mpi_resize() to return wrong results if they use the additional space without initializing it; if that condition is not met, the bug is not triggered. Currently in the kernel, rsa, sm2 and dh use the mpi lib and work well, so the bug is not triggered in those cases. add_points_edwards(), however, uses the additional space directly, so it would get a wrong result. Fixes: cdec9cb5 ("crypto: GnuPG based MPI lib - source files (part 1)") Signed-off-by: Hongbo Li <herberthbli@tencent.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
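A sketch of the difference, not the mpi code itself; grow_limbs() and its parameters are invented for illustration:

#include <linux/slab.h>
#include <linux/string.h>

static unsigned long *grow_limbs(unsigned long *old, unsigned int old_n, unsigned int new_n)
{
	/* kcalloc() returns zeroed memory, so the limbs beyond old_n start at 0;
	 * kmalloc_array() would leave them uninitialised, which is the bug
	 * described above. */
	unsigned long *p = kcalloc(new_n, sizeof(*p), GFP_KERNEL);

	if (!p)
		return NULL;
	memcpy(p, old, old_n * sizeof(*p));	/* keep the existing limbs */
	kfree(old);
	return p;
}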
-
Sebastian Andrzej Siewior authored
The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock(). Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged. Cc: Steffen Klassert <steffen.klassert@secunet.com> Cc: Daniel Jordan <daniel.m.jordan@oracle.com> Cc: linux-crypto@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Sebastian Andrzej Siewior authored
The functions get_online_cpus() and put_online_cpus() have been deprecated during the CPU hotplug rework. They map directly to cpus_read_lock() and cpus_read_unlock(). Replace deprecated CPU-hotplug functions with the official version. The behavior remains unchanged. Cc: Gonglei <arei.gonglei@huawei.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: virtualization@lists.linux-foundation.org Cc: linux-crypto@vger.kernel.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Jason Wang authored
kfree_sensitive() is a kernel API that clears sensitive information that should not be leaked to other future users of the same memory object and then frees the memory. Its function is the same as the combination of memzero_explicit() and kfree(). Thus, we can replace the combination of APIs with the single kfree_sensitive() API. Signed-off-by: Jason Wang <wangborong@cdjrlc.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
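A before/after sketch with an invented key-release helper, assuming key points to a kmalloc'd buffer of key_len bytes:

#include <linux/slab.h>
#include <linux/string.h>
#include <linux/types.h>

static void drop_key_old(u8 *key, size_t key_len)
{
	memzero_explicit(key, key_len);	/* wipe... */
	kfree(key);			/* ...then free, in two steps */
}

static void drop_key_new(u8 *key)
{
	kfree_sensitive(key);		/* zeroes the allocation and frees it in one call */
}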
-
- 06 Aug, 2021 9 commits
-
-
Andre Przywara authored
The "Arm True Random Number Generator Firmware Interface"[1] provides an SMCCC based interface to a true hardware random number generator. So far we are using that in arch_get_random_seed(), but it might be useful to expose the entropy through the /dev/hwrng device as well. This allows to assess the quality of the implementation, by using "rngtest" from the rng-tools package, for example. Add a simple platform driver implementing the hw_random interface. The corresponding platform device is created by the SMCCC core code, we just match it here by name and provide a module alias. Since the firmware takes care about serialisation, this can happily coexist with the arch_get_random_seed() bits. [1] https://developer.arm.com/documentation/den0098/latest/Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Andre Przywara authored
At the moment we probe for the Random Number Generator SMCCC service, and use that in the core code (arch_get_random). However the hardware entropy can also be useful to access from userland, if only to assess its quality. Register a platform device when the SMCCC TRNG service is detected, to allow a hw_random driver to hook onto this. The function registering the device is deliberately made in a way which allows expansion, so other services that could be exposed via a platform device (or some other interface) can be added here easily. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Reviewed-by: Mark Brown <broonie@kernel.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Brijesh Singh authored
The commit 97f9ac3d ("crypto: ccp - Add support for SEV-ES to the PSP driver") added support to allocate Trusted Memory Region (TMR) used during the SEV-ES firmware initialization. The TMR gets locked during the firmware initialization and unlocked during the shutdown. While the TMR is locked, access to it is disallowed. Currently, the CCP driver does not shutdown the firmware during the kexec reboot, leaving the TMR memory locked. Register a callback to shutdown the SEV firmware on the kexec boot. Fixes: 97f9ac3d ("crypto: ccp - Add support for SEV-ES to the PSP driver") Reported-by: Lucas Nussbaum <lucas.nussbaum@inria.fr> Tested-by: Lucas Nussbaum <lucas.nussbaum@inria.fr> Cc: <stable@kernel.org> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Joerg Roedel <jroedel@suse.de> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Acked-by: Tom Lendacky <thomas.lendacky@gmail.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tony Lindgren authored
Commit b0a3d898 ("crypto: omap-sham - Use pm_runtime_irq_safe()") added the use of pm_runtime_irq_safe() as pm_runtime_get_sync() was called from a tasklet. We now use the crypto engine queue instead of a custom queue since commit 33c3d434d91 ("crypto: omap-sham - convert to use crypto engine"). We want to drop the use of pm_runtime_irq_safe() in general as it takes a permanent usage count on the parent device causing issues for power management. Based on testing with CONFIG_DEBUG_ATOMIC_SLEEP=y, modprobe omap-sham, followed by modprobe tcrypt sec=1 mode=423, I have not been able to reproduce the scheduling while atomic issue seen earlier with current kernels and we can just drop the call to pm_runtime_irq_safe(). Cc: Lokesh Vutla <lokeshvutla@ti.com> Cc: Tero Kristo <kristo@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tony Lindgren authored
Let's get rid of the suspend and resume calls to runtime PM as these calls do not idle the hardware. The runtime suspend has been disabled for system suspend since commit 88d26136 ("PM: Prevent runtime suspend during system resume"). Instead of runtime PM, the system suspend and resume functions should call driver internal shared functions to idle the hardware as needed. Cc: Lokesh Vutla <lokeshvutla@ti.com> Cc: Tero Kristo <kristo@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tony Lindgren authored
FLAGS_INIT is now unused and we can just use standard runtime PM functions instead. Cc: Lokesh Vutla <lokeshvutla@ti.com> Cc: Tero Kristo <kristo@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tony Lindgren authored
We should pair the usage of pm_runtime_use_autosuspend() with pm_runtime_dont_use_autosuspend(). Cc: Lokesh Vutla <lokeshvutla@ti.com> Cc: Tero Kristo <kristo@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tony Lindgren authored
Let's only initialize dd->req after omap_sham_hw_init() in case of errors. It looks like leaving dd->req initialized on omap_sham_hw_init() errors is not causing issues though, as we return on errors, so this patch can be applied as a clean-up. Cc: Lokesh Vutla <lokeshvutla@ti.com> Cc: Tero Kristo <kristo@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tony Lindgren authored
We should not clear FLAGS_DMA_ACTIVE before omap_sham_update_dma_stop() is done calling dma_unmap_sg(). We already clear FLAGS_DMA_ACTIVE at the end of omap_sham_update_dma_stop(). The early clearing of FLAGS_DMA_ACTIVE is not causing issues as we do not need to defer anything based on FLAGS_DMA_ACTIVE currently. So this can be applied as clean-up. Cc: Lokesh Vutla <lokeshvutla@ti.com> Cc: Tero Kristo <kristo@kernel.org> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
- 30 Jul, 2021 1 commit
-
-
Salah Triki authored
Use swap() instead of open-coding it in order to make the code cleaner. Signed-off-by: Salah Triki <salah.triki@gmail.com> Reviewed-by: Tudor Ambarus <tudor.ambarus@microchip.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
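A tiny sketch of the substitution; the wrapper and its arguments are illustrative:

#include <linux/kernel.h>	/* swap() macro */
#include <linux/types.h>

static void order_pair_sketch(u32 *a, u32 *b)
{
	if (*a > *b)
		swap(*a, *b);	/* replaces the open-coded tmp = *a; *a = *b; *b = tmp; */
}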
-