- 08 Feb, 2019 10 commits
-
-
Eric Biggers authored
The x86 MORUS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can incorrectly sleep in the skcipher_walk_*() functions while preemption has been disabled by kernel_fpu_begin(). Fix these bugs. Fixes: 56e8e57f ("crypto: morus - Add common SIMD glue code for MORUS") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
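The corrected walk pattern looks roughly like the sketch below (illustrative only, not the actual patch; BLOCK_SIZE, struct my_state, crypt_blocks(), crypt_tail(), process_crypt() and do_encrypt() are placeholders for the per-algorithm stride, state and SIMD helpers). Full blocks are consumed in stride-aligned chunks, a short 'nbytes' is handed back to the walker instead of being treated as the tail, and the walk is started with atomic=true so skcipher_walk_done() cannot sleep inside the kernel_fpu_begin()/kernel_fpu_end() section:

```c
static void process_crypt(struct my_state *state, struct skcipher_walk *walk)
{
	while (walk->nbytes >= BLOCK_SIZE) {
		/* Only consume the stride-aligned part; a short 'nbytes'
		 * here does NOT mean we reached the end of the data. */
		crypt_blocks(state, walk->dst.virt.addr, walk->src.virt.addr,
			     round_down(walk->nbytes, BLOCK_SIZE));
		skcipher_walk_done(walk, walk->nbytes % BLOCK_SIZE);
	}

	if (walk->nbytes) {
		/* A partial block can only be the true tail of the message. */
		crypt_tail(state, walk->dst.virt.addr, walk->src.virt.addr,
			   walk->nbytes);
		skcipher_walk_done(walk, 0);
	}
}

static int do_encrypt(struct aead_request *req, struct my_state *state)
{
	struct skcipher_walk walk;

	/* atomic=true: skcipher_walk_done() must not sleep while preemption
	 * is disabled by kernel_fpu_begin(). */
	skcipher_walk_aead_encrypt(&walk, req, true);

	kernel_fpu_begin();
	process_crypt(state, &walk);
	kernel_fpu_end();
	return 0;
}
```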
-
Eric Biggers authored
The x86 AEGIS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can incorrectly sleep in the skcipher_walk_*() functions while preemption has been disabled by kernel_fpu_begin(). Fix these bugs. Fixes: 1d373d4e ("crypto: x86 - Add optimized AEGIS implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The generic MORUS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: 396be41f ("crypto: morus - Add generic MORUS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The generic AEGIS implementations all fail the improved AEAD tests because they produce the wrong result with some data layouts. The issue is that they assume that if the skcipher_walk API gives 'nbytes' not aligned to the walksize (a.k.a. walk.stride), then it is the end of the data. In fact, this can happen before the end. Fix them. Fixes: f606a88e ("crypto: aegis - Add generic AEGIS AEAD implementations") Cc: <stable@vger.kernel.org> # v4.18+ Cc: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Pankaj Gupta authored
When the source sg contains more than one fragment and the destination sg contains a single fragment, the caam driver mishandles the buffers to be sent to caam. Fixes: f2147b88 ("crypto: caam - Convert GCM to new AEAD interface") Cc: <stable@vger.kernel.org> # 4.2+ Signed-off-by: Pankaj Gupta <pankaj.gupta@nxp.com> Signed-off-by: Arun Pathak <arun.pathak@nxp.com> Reviewed-by: Horia Geanta <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The x86, arm, and arm64 asm implementations of crct10dif are very difficult to understand partly because many of the comments, labels, and macros are named incorrectly: the lengths mentioned are usually off by a factor of two from the actual code. Many other things are unnecessarily convoluted as well, e.g. there are many more fold constants than actually needed and some aren't fully reduced. This series therefore cleans up all these implementations to be much more maintainable. I also made some small optimizations where I saw opportunities, resulting in slightly better performance. This patch cleans up the arm64 version. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The x86, arm, and arm64 asm implementations of crct10dif are very difficult to understand partly because many of the comments, labels, and macros are named incorrectly: the lengths mentioned are usually off by a factor of two from the actual code. Many other things are unnecessarily convoluted as well, e.g. there are many more fold constants than actually needed and some aren't fully reduced. This series therefore cleans up all these implementations to be much more maintainable. I also made some small optimizations where I saw opportunities, resulting in slightly better performance. This patch cleans up the arm version. (Also moved the constants to .rodata as suggested by Ard Biesheuvel.) Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
The x86, arm, and arm64 asm implementations of crct10dif are very difficult to understand partly because many of the comments, labels, and macros are named incorrectly: the lengths mentioned are usually off by a factor of two from the actual code. Many other things are unnecessarily convoluted as well, e.g. there are many more fold constants than actually needed and some aren't fully reduced. This series therefore cleans up all these implementations to be much more maintainable. I also made some small optimizations where I saw opportunities, resulting in slightly better performance. This patch cleans up the x86 version. As part of this, I removed support for len < 16 from the x86 assembly; now the glue code falls back to the generic table-based implementation in this case. Due to the overhead of kernel_fpu_begin(), this actually significantly improves performance on these lengths. (And even if kernel_fpu_begin() were free, the generic code is still faster for about len < 11.) This removal also eliminates error-prone special cases and makes the x86, arm32, and arm64 ports of the code match more closely. Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
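The resulting x86 glue logic is roughly the following (a sketch, assuming the driver's chksum_desc_ctx context structure, crc_t10dif_pcl() as the PCLMULQDQ routine, and the existing crc_t10dif_generic() table-based fallback):

```c
static int chksum_update(struct shash_desc *desc, const u8 *data,
			 unsigned int length)
{
	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);

	/* Short inputs: the generic table-based code is faster than paying
	 * for kernel_fpu_begin()/kernel_fpu_end(). */
	if (length >= 16 && irq_fpu_usable()) {
		kernel_fpu_begin();
		ctx->crc = crc_t10dif_pcl(ctx->crc, data, length);
		kernel_fpu_end();
	} else {
		ctx->crc = crc_t10dif_generic(ctx->crc, data, length);
	}
	return 0;
}
```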
-
Singh, Brijesh authored
A kexec reboot may leave the firmware in INIT or WORKING state. Currently, we issue the PLATFORM_INIT command during probe without checking the current state. The PLATFORM_INIT command fails if the FW is already in INIT state. Let's check the current state; if the FW is not in the UNINIT state, transition it to UNINIT before initializing or upgrading the FW. Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Gary Hook <gary.hook@amd.com> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Christopher Diaz Riveros authored
Fixes coccinelle alerts:

/crypto/testmgr.c:2112:13-20: WARNING opportunity for kmemdup
/crypto/testmgr.c:2130:13-20: WARNING opportunity for kmemdup
/crypto/testmgr.c:2152:9-16: WARNING opportunity for kmemdup

Signed-off-by: Christopher Diaz Riveros <chrisadr@gentoo.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
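The underlying pattern is the usual one flagged by the kmemdup coccinelle check, shown generically here (not the exact testmgr.c hunks):

```c
/* Before: separate allocation and copy */
buf = kmalloc(len, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
memcpy(buf, src, len);

/* After: kmemdup() allocates and copies in one step */
buf = kmemdup(src, len, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
```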
-
- 01 Feb, 2019 30 commits
-
-
Ard Biesheuvel authored
The arm64 CRC-T10DIF implementation uses either 8-bit or 64-bit polynomial multiplication instructions, since the latter are faster but not mandatory in the architecture. Since that prevents us from testing both implementations on the same system, let's expose both implementations to the crypto API, with the priorities reflecting that the P64 version is the preferred one if available. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
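Registration then looks roughly like the sketch below (the field values, update callbacks and driver names are illustrative assumptions; the point is that both variants are exposed and the P64 one gets the higher cra_priority so it wins whenever the CPU implements the 64-bit polynomial multiply):

```c
static struct shash_alg crc_t10dif_algs[] = { {
	.digestsize		= CRC_T10DIF_DIGEST_SIZE,
	.init			= crct10dif_init,
	.update			= crct10dif_update_pmull_p8,	/* 8-bit pmull */
	.final			= crct10dif_final,
	.descsize		= CRC_T10DIF_DIGEST_SIZE,
	.base.cra_name		= "crct10dif",
	.base.cra_driver_name	= "crct10dif-arm64-neon",
	.base.cra_priority	= 100,
	.base.cra_blocksize	= CRC_T10DIF_BLOCK_SIZE,
	.base.cra_module	= THIS_MODULE,
}, {
	.digestsize		= CRC_T10DIF_DIGEST_SIZE,
	.init			= crct10dif_init,
	.update			= crct10dif_update_pmull_p64,	/* 64-bit pmull */
	.final			= crct10dif_final,
	.descsize		= CRC_T10DIF_DIGEST_SIZE,
	.base.cra_name		= "crct10dif",
	.base.cra_driver_name	= "crct10dif-arm64-ce",
	.base.cra_priority	= 200,	/* preferred when PMULL is available */
	.base.cra_blocksize	= CRC_T10DIF_BLOCK_SIZE,
	.base.cra_module	= THIS_MODULE,
} };
```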
-
Ard Biesheuvel authored
Remove some code that is no longer called now that we make sure never to invoke the SIMD routine with less than 16 bytes of input. Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Remove some code that is no longer called now that we make sure never to invoke the SIMD routine with less than 16 bytes of input. Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
The SIMD routine ported from x86 used to have a special code path for inputs < 16 bytes, which got lost somewhere along the way. Instead, the current glue code aligns the input pointer to 16 bytes, which is not really necessary on this architecture (although it could be beneficial to performance to expose aligned data to the NEON routine), but this could result in inputs of less than 16 bytes being passed in. This not only fails the new extended tests that Eric has implemented, it also results in the code reading past the end of the input, which could potentially result in crashes when dealing with less than 16 bytes of input at the end of a page which is followed by an unmapped page. So update the glue code to only invoke the NEON routine if the input is at least 16 bytes. Reported-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Fixes: 6ef5737f ("crypto: arm64/crct10dif - port x86 SSE implementation to arm64") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
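The updated arm64 glue then takes roughly this shape (a sketch; crc_t10dif_pmull() names the NEON routine, and the 16-byte threshold is the minimum the assembly folds over):

```c
static int crct10dif_update(struct shash_desc *desc, const u8 *data,
			    unsigned int length)
{
	u16 *crc = shash_desc_ctx(desc);

	/* Never hand the NEON routine fewer than 16 bytes; fall back to
	 * the generic table-based implementation instead. */
	if (length >= 16 && may_use_simd()) {
		kernel_neon_begin();
		*crc = crc_t10dif_pmull(*crc, data, length);
		kernel_neon_end();
	} else {
		*crc = crc_t10dif_generic(*crc, data, length);
	}

	return 0;
}
```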
-
Ard Biesheuvel authored
The SIMD routine ported from x86 used to have a special code path for inputs < 16 bytes, which got lost somewhere along the way. Instead, the current glue code aligns the input pointer to permit the NEON routine to use special versions of the vld1 instructions that assume 16 byte alignment, but this could result in inputs of less than 16 bytes being passed in. This not only fails the new extended tests that Eric has implemented, it also results in the code reading past the end of the input, which could potentially result in crashes when dealing with less than 16 bytes of input at the end of a page which is followed by an unmapped page. So update the glue code to only invoke the NEON routine if the input is at least 16 bytes. Reported-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Fixes: 1d481f1c ("crypto: arm/crct10dif - port x86 SSE implementation to ARM") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Horia Geantă authored
Roland reports the following issue and provides a root cause analysis:

"On a v4.19 i.MX6 system with IMA and CONFIG_DMA_API_DEBUG enabled, a warning is generated when accessing files on a filesystem for which IMA measurement is enabled:

------------[ cut here ]------------
WARNING: CPU: 0 PID: 1 at kernel/dma/debug.c:1181 check_for_stack.part.9+0xd0/0x120
caam_jr 2101000.jr0: DMA-API: device driver maps memory from stack [addr=b668049e]
Modules linked in:
CPU: 0 PID: 1 Comm: switch_root Not tainted 4.19.0-20181214-1 #2
Hardware name: Freescale i.MX6 Quad/DualLite (Device Tree)
Backtrace:
[<c010efb8>] (dump_backtrace) from [<c010f2d0>] (show_stack+0x20/0x24)
[<c010f2b0>] (show_stack) from [<c08b04f4>] (dump_stack+0xa0/0xcc)
[<c08b0454>] (dump_stack) from [<c012b610>] (__warn+0xf0/0x108)
[<c012b520>] (__warn) from [<c012b680>] (warn_slowpath_fmt+0x58/0x74)
[<c012b62c>] (warn_slowpath_fmt) from [<c0199acc>] (check_for_stack.part.9+0xd0/0x120)
[<c01999fc>] (check_for_stack.part.9) from [<c019a040>] (debug_dma_map_page+0x144/0x174)
[<c0199efc>] (debug_dma_map_page) from [<c065f7f4>] (ahash_final_ctx+0x5b4/0xcf0)
[<c065f240>] (ahash_final_ctx) from [<c065b3c4>] (ahash_final+0x1c/0x20)
[<c065b3a8>] (ahash_final) from [<c03fe278>] (crypto_ahash_op+0x38/0x80)
[<c03fe240>] (crypto_ahash_op) from [<c03fe2e0>] (crypto_ahash_final+0x20/0x24)
[<c03fe2c0>] (crypto_ahash_final) from [<c03f19a8>] (ima_calc_file_hash+0x29c/0xa40)
[<c03f170c>] (ima_calc_file_hash) from [<c03f2b24>] (ima_collect_measurement+0x1dc/0x240)
[<c03f2948>] (ima_collect_measurement) from [<c03f0a60>] (process_measurement+0x4c4/0x6b8)
[<c03f059c>] (process_measurement) from [<c03f0cdc>] (ima_file_check+0x88/0xa4)
[<c03f0c54>] (ima_file_check) from [<c02d8adc>] (path_openat+0x5d8/0x1364)
[<c02d8504>] (path_openat) from [<c02dad24>] (do_filp_open+0x84/0xf0)
[<c02daca0>] (do_filp_open) from [<c02cf50c>] (do_open_execat+0x84/0x1b0)
[<c02cf488>] (do_open_execat) from [<c02d1058>] (__do_execve_file+0x43c/0x890)
[<c02d0c1c>] (__do_execve_file) from [<c02d1770>] (sys_execve+0x44/0x4c)
[<c02d172c>] (sys_execve) from [<c0101000>] (ret_fast_syscall+0x0/0x28)
---[ end trace 3455789a10e3aefd ]---

The cause is that the struct ahash_request *req is created as a stack-local variable up in the stack (presumably somewhere in the IMA implementation), then passed down into the CAAM driver, which tries to dma_map_single the req->result (indirectly via map_seq_out_ptr_result) in order to make that buffer available for the CAAM to store the result of the following hash operation.

The calling code doesn't know how req will be used by the CAAM driver, and there could be other such occurrences where stack memory is passed down to the CAAM driver. Therefore we should rather fix this issue in the CAAM driver where the requirements are known."

Fix this problem by:
-instructing the crypto engine to write the final hash in state->caam_ctx
-subsequently memcpy-ing the final hash into req->result

Cc: <stable@vger.kernel.org> # v4.19+
Reported-by: Roland Hieber <rhi@pengutronix.de>
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Tested-by: Roland Hieber <rhi@pengutronix.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
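In other words, the hardware is pointed at the driver-owned, DMA-mapped context buffer, and the possibly stack-allocated req->result is only ever written with a plain CPU copy once the job has completed. A minimal sketch of the completion path (simplified; the state/caam_ctx names follow the commit message):

```c
static void ahash_done_copy(struct ahash_request *req,
			    struct caam_hash_state *state,
			    unsigned int digestsize)
{
	/* The job descriptor stored the digest in the DMA-mapped
	 * state->caam_ctx; req->result may live on the caller's stack,
	 * so it must never be DMA-mapped, only memcpy'd into. */
	memcpy(req->result, state->caam_ctx, digestsize);
}
```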
-
Ard Biesheuvel authored
The arm64 GHASH implementation uses either 8-bit or 64-bit polynomial multiplication instructions, since the latter are faster but not mandatory in the architecture. Since that prevents us from testing both implementations on the same system, let's expose both implementations to the crypto API, with the priorities reflecting that the P64 version is the preferred one if available. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Milan Broz authored
CRC32 is not a cryptographic hash algorithm, so the FIPS restrictions should not apply to it. (The CRC32C variant is already allowed.) This CRC32 variant is used in the dm-crypt legacy TrueCrypt IV implementation (tcw); detected by a cryptsetup test suite failure in FIPS mode. Signed-off-by: Milan Broz <gmazyland@gmail.com> Reviewed-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
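Concretely, the change amounts to setting the fips_allowed flag on the crc32 entries in the testmgr algorithm table, for example (illustrative excerpt, not the full hunk):

```c
{
	.alg = "crc32",
	.test = alg_test_hash,
	.fips_allowed = 1,	/* CRC32 is not a cryptographic hash */
	.suite = {
		.hash = __VECS(crc32_tv_template)
	}
},
```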
-
Masahiro Yamada authored
The header search path -I. in kernel Makefiles is very suspicious; it allows the compiler to search for headers in the top of $(srctree), where obviously no header file exists. 'git grep BCMDRIVER' has no hit. So, this macro is not referenced. I was able to build this driver without the extra compiler options. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Masahiro Yamada authored
Currently, the Kbuild core manipulates header search paths in a crazy way [1]. To fix this mess, I want all Makefiles to add explicit $(srctree)/ to the search paths in the srctree. Some Makefiles are already written in that way, but not all. The goal of this work is to make the notation consistent, and finally get rid of the gross hacks. Having whitespaces after -I does not matter since commit 48f6e3cf ("kbuild: do not drop -I without parameter"). [1]: https://patchwork.kernel.org/patch/9632347/ Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
When the AES-CCM code was first added, the NEON registers were saved and restored eagerly, so the code avoided doing so more than once and executed the scatterwalk in atomic context inside the kernel_neon_begin/end section. This has been changed in the meantime, so switch to non-atomic scatterwalks. Fixes: bd2ad885 ("crypto: arm64/aes-ce-ccm - move kernel mode neon ...") Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
Commit 5092fcf3 ("crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback") introduced C fallback code to replace the NEON routines when invoked from a context where the NEON is not available (i.e., from the context of a softirq taken while the NEON is already being used in kernel process context). Fix two logical flaws in the MAC calculation of the associated data. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: 5092fcf3 ("crypto: arm64/aes-ce-ccm: add non-SIMD generic fallback") Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ard Biesheuvel authored
The NEON MAC calculation routine fails to handle the case correctly where there is some data in the buffer, and the input fills it up exactly. In this case, we enter the loop at the end with w8 == 0, while a negative value is assumed, and so the loop carries on until the increment of the 32-bit counter wraps around, which is quite obviously wrong. So omit the loop altogether in this case, and exit right away. Reported-by: Eric Biggers <ebiggers@kernel.org> Fixes: a3fd8210 ("arm64/crypto: AES in CCM mode using ARMv8 Crypto ...") Cc: stable@vger.kernel.org Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Instantiating "cryptd(crc32c)" causes a crypto self-test failure because the crypto_alloc_shash() in alg_test_crc32c() fails. This is because cryptd(crc32c) is an ahash algorithm, not a shash algorithm; so it can only be accessed through the ahash API, unlike shash algorithms which can be accessed through both the ahash and shash APIs. As the test is testing the shash descriptor format which is only applicable to shash algorithms, skip it for ahash algorithms. (Note that it's still important to fix crypto self-test failures even for weird algorithm instantiations like cryptd(crc32c) that no one would really use; in fips_enabled mode unprivileged users can use them to panic the kernel, and also they prevent treating a crypto self-test failure as a bug when fuzzing the kernel.) Fixes: 8e3ee85e ("crypto: crc32c - Test descriptor context format") Cc: stable@vger.kernel.org Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Vincent Whitchurch authored
The request unmapping and bounce buffer copying are currently done unnecessarily while holding the queue spin lock. Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Vincent Whitchurch <rabinv@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Lars Persson authored
Avoid plain memcmp() on the AEAD tag value as this could leak information through a timing side channel. Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
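The safe pattern is crypto_memneq(), which compares the full length regardless of where the first mismatch occurs (sketch; the tag buffer names are placeholders):

```c
/* Constant-time tag check: crypto_memneq() has no data-dependent
 * early exit, unlike memcmp(). */
if (crypto_memneq(computed_tag, received_tag, authsize))
	return -EBADMSG;
```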
-
Lars Persson authored
The implementation assumed that the client always wants the whole 16 byte AES-GCM tag. Now we respect the requested authentication tag size fetched using crypto_aead_authsize(). Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
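The requested tag length is obtained from the transform instead of hard-coding 16 bytes (sketch; the tag buffer names are placeholders):

```c
struct crypto_aead *aead = crypto_aead_reqtfm(req);
unsigned int authsize = crypto_aead_authsize(aead);

/* Emit only the tag length the client asked for, not the full
 * 16-byte GCM tag. */
memcpy(out_tag, computed_tag, authsize);
```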
-
Lars Persson authored
The driver was optimized to only do cache maintenance for the last word of the dma descriptor status array. Unfortunately, the same change also mistakenly passed the address of that last word, rather than the start of the array, to the DMA engine. In most cases this goes unnoticed since the hardware aligns the address to a 64 byte boundary. Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Lars Persson authored
Clients may submit a new request from the completion callback context. The driver was not prepared to receive a request in this state because it already held the request queue lock, so a recursive lock error was triggered. Now all completions are queued up until we are ready to drop the queue lock, and only then delivered.

The fault was triggered by TCP over an IPsec connection in the LTP test suite:

LTP: starting tcp4_ipsec02 (tcp_ipsec.sh -p ah -m transport -s "100 1000 65535")
BUG: spinlock recursion on CPU#1, genload/943
lock: 0xbf3c3094, .magic: dead4ead, .owner: genload/943, .owner_cpu: 1
CPU: 1 PID: 943 Comm: genload Tainted: G O 4.9.62-axis5-devel #6
Hardware name: Axis ARTPEC-6 Platform
(unwind_backtrace) from [<8010d134>] (show_stack+0x18/0x1c)
(show_stack) from [<803a289c>] (dump_stack+0x84/0x98)
(dump_stack) from [<8016e164>] (do_raw_spin_lock+0x124/0x128)
(do_raw_spin_lock) from [<804de1a4>] (artpec6_crypto_submit+0x2c/0xa0)
(artpec6_crypto_submit) from [<804def38>] (artpec6_crypto_prepare_submit_hash+0xd0/0x54c)
(artpec6_crypto_prepare_submit_hash) from [<7f3165f0>] (ah_output+0x2a4/0x3dc [ah4])
(ah_output [ah4]) from [<805df9bc>] (xfrm_output_resume+0x178/0x4a4)
(xfrm_output_resume) from [<805d283c>] (xfrm4_output+0xac/0xbc)
(xfrm4_output) from [<80587928>] (ip_queue_xmit+0x140/0x3b4)
(ip_queue_xmit) from [<805a13b4>] (tcp_transmit_skb+0x4c4/0x95c)
(tcp_transmit_skb) from [<8059f218>] (tcp_rcv_state_process+0xdf4/0xdfc)
(tcp_rcv_state_process) from [<805a7530>] (tcp_v4_do_rcv+0x64/0x1ac)
(tcp_v4_do_rcv) from [<805a9724>] (tcp_v4_rcv+0xa34/0xb74)
(tcp_v4_rcv) from [<80581d34>] (ip_local_deliver_finish+0x78/0x2b0)
(ip_local_deliver_finish) from [<8058259c>] (ip_local_deliver+0xe4/0x104)
(ip_local_deliver) from [<805d23ec>] (xfrm4_transport_finish+0xf4/0x144)
(xfrm4_transport_finish) from [<805df564>] (xfrm_input+0x4f4/0x74c)
(xfrm_input) from [<804de420>] (artpec6_crypto_task+0x208/0x38c)
(artpec6_crypto_task) from [<801271b0>] (tasklet_action+0x60/0xec)
(tasklet_action) from [<801266d4>] (__do_softirq+0xcc/0x3a4)
(__do_softirq) from [<80126d20>] (irq_exit+0xf4/0x15c)
(irq_exit) from [<801741e8>] (__handle_domain_irq+0x68/0xbc)
(__handle_domain_irq) from [<801014f0>] (gic_handle_irq+0x50/0x94)
(gic_handle_irq) from [<80657370>] (__irq_usr+0x50/0x80)

Signed-off-by: Lars Persson <larper@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
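The pattern is the usual one of splicing finished requests onto a private list under the lock and running their callbacks only after the lock has been released (a sketch with hypothetical structure and field names, not the driver's exact code):

```c
static void process_completions(struct my_crypto_dev *dev)
{
	struct my_crypto_req *req, *tmp;
	LIST_HEAD(done);

	/* Move finished requests off the shared queue while holding
	 * the lock... */
	spin_lock_bh(&dev->queue_lock);
	list_splice_init(&dev->completed, &done);
	spin_unlock_bh(&dev->queue_lock);

	/* ...and only invoke the callbacks afterwards, so a callback
	 * may safely submit a new request (which takes queue_lock). */
	list_for_each_entry_safe(req, tmp, &done, list) {
		list_del(&req->list);
		req->complete(req);
	}
}
```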
-
Lars Persson authored
The hardware cannot restore the context correctly when it operates in SHA512 mode. This is too restrictive when operating in a framework that can interleave multiple hash sessions. Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Lars Persson authored
The hardware implementation of SHA384 was not correct and it cannot be used in any situation. Signed-off-by: Lars Persson <larper@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Corentin Labbe authored
When building without CONFIG_HW_RANDOM_PPC4XX, I hit the following build failure:

drivers/crypto/amcc/crypto4xx_core.c: In function 'crypto4xx_probe':
drivers/crypto/amcc/crypto4xx_core.c:1407:20: error: passing argument 1 of 'ppc4xx_trng_probe' from incompatible pointer type [-Werror=incompatible-pointer-types]
In file included from drivers/crypto/amcc/crypto4xx_core.c:50:0:
drivers/crypto/amcc/crypto4xx_trng.h:28:20: note: expected 'struct crypto4xx_device *' but argument is of type 'struct crypto4xx_core_device *'
drivers/crypto/amcc/crypto4xx_core.c: In function 'crypto4xx_remove':
drivers/crypto/amcc/crypto4xx_core.c:1434:21: error: passing argument 1 of 'ppc4xx_trng_remove' from incompatible pointer type [-Werror=incompatible-pointer-types]
In file included from drivers/crypto/amcc/crypto4xx_core.c:50:0:
drivers/crypto/amcc/crypto4xx_trng.h:30:20: note: expected 'struct crypto4xx_device *' but argument is of type 'struct crypto4xx_core_device *'

This patch fixes the expected argument type of ppc4xx_trng_probe()/ppc4xx_trng_remove() in that case.

Fixes: 5343e674 ("crypto4xx: integrate ppc4xx-rng into crypto4xx")
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
YueHaibing authored
Fix a static code checker warning: drivers/crypto/chelsio/chcr_algo.c:3681 chcr_aead_op() warn: passing zero to 'PTR_ERR' Fixes: 2debd332 ("crypto: chcr - Add AEAD algos.") Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
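The checker fires on the common pattern of feeding a possibly-NULL (or valid) pointer to PTR_ERR(). Generically, the fix looks like this (an illustration of the pattern, not the exact chcr hunk; create_aead_wr() is a hypothetical helper):

```c
skb = create_aead_wr(req);	/* may return a valid skb, NULL, or ERR_PTR() */

/* Before: 'return PTR_ERR(skb);' yields 0 when skb is NULL and a
 * meaningless value when skb is a valid pointer. */

/* After: only convert genuine error pointers, handle NULL explicitly. */
if (IS_ERR(skb))
	return PTR_ERR(skb);
if (!skb)
	return -ENOMEM;
```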
-
Roland Hieber authored
Signed-off-by: Roland Hieber <rhi@pengutronix.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Greg Kroah-Hartman authored
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Cc: "Horia Geantă" <horia.geanta@nxp.com> Cc: Aymen Sghaier <aymen.sghaier@nxp.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Horia Geantă <horia.geanta@nxp.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
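The change pattern across this debugfs series is simply dropping the error handling around the calls (illustrative before/after; the directory name is an example):

```c
/* Before: a debugfs failure was treated as fatal */
dfs_root = debugfs_create_dir("caam", NULL);
if (IS_ERR_OR_NULL(dfs_root))
	return -ENOMEM;

/* After: just make the call; debugfs is best-effort and its own
 * functions cope with error-valued or NULL parents. */
dfs_root = debugfs_create_dir("caam", NULL);
```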
-
Greg Kroah-Hartman authored
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: Gary Hook <gary.hook@amd.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Greg Kroah-Hartman authored
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: Srikanth Jampala <Jampala.Srikanth@cavium.com> Cc: Yangtao Li <tiny.windzz@gmail.com> Cc: Gadam Sreerama <sgadam@cavium.com> Cc: Eric Biggers <ebiggers@google.com> Cc: linux-crypto@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Greg Kroah-Hartman authored
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: Robert Richter <rrichter@cavium.com> Cc: Jan Glauber <jglauber@cavium.com> Cc: linux-crypto@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Reviewed-by: Jan Glauber <jglauber@cavium.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Greg Kroah-Hartman authored
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Cc: Jesper Nilsson <jesper.nilsson@axis.com> Cc: Lars Persson <lars.persson@axis.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-arm-kernel@axis.com Cc: linux-crypto@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-by: Lars Persson <lars.persson@axis.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Greg Kroah-Hartman authored
When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this. Cc: Yael Chemla <yael.chemla@foss.arm.com> Cc: Gilad Ben-Yossef <gilad@benyossef.com> Cc: Herbert Xu <herbert@gondor.apana.org.au> Cc: "David S. Miller" <davem@davemloft.net> Cc: linux-crypto@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Acked-By: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-