- 13 Jul, 2018 2 commits
-
-
Tom Lendacky authored
Add a dev_notice() message to the PSP initialization to report when the PSP initialization has succeeded and the PSP is enabled. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Tom Lendacky authored
The wait_event() function is used to detect command completion. The interrupt handler will set the wait condition variable when the interrupt is triggered. However, the variable used for wait_event() is initialized after the command has been submitted, which can create a race condition with the interrupt handler and result in the wait_event() never returning. Move the initialization of the wait condition variable to just before command submission. Fixes: 200664d5 ("crypto: ccp: Add Secure Encrypted Virtualization (SEV) command support") Cc: <stable@vger.kernel.org> # 4.16.x- Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Reviewed-by: Brijesh Singh <brijesh.singh@amd.com> Acked-by: Gary R Hook <gary.hook@amd.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
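A hedged sketch of the shape of the fix, using hypothetical field names rather than the exact ccp/SEV structures: the wait condition must be cleared before the command doorbell is written, otherwise a completion interrupt that fires immediately after submission can be lost.

    /* Illustrative only: the names below are placeholders, not the driver's. */
    psp->cmd_done = 0;                        /* init condition BEFORE submitting */
    iowrite32(cmd, psp->io_regs + CMD_REG);   /* submit the command */
    wait_event(psp->cmd_queue, psp->cmd_done);

In the broken ordering, the "psp->cmd_done = 0" ran after the iowrite32(), so a fast completion interrupt could set the flag only for the late initialization to clear it again, leaving wait_event() waiting forever.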
-
- 08 Jul, 2018 38 commits
-
-
Gilad Ben-Yossef authored
A debug print about register status post interrupt can happen quite often. Rate limit it to avoid cluttering the log. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
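A minimal sketch of the pattern (the exact message and register name in ccree may differ):

    /* Rate-limited so a busy interrupt line cannot flood the kernel log. */
    dev_dbg_ratelimited(dev, "Got IRR=0x%08X\n", irr);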
-
Gilad Ben-Yossef authored
The ccree driver implemented the NIST 800-38A CBC-CS2 ciphertext format, which only swaps the last two blocks if the amount of stolen ciphertext is non-zero. Move it to the kernel's chosen format, CBC-CS3, which swaps the final blocks unconditionally, and rename it to "cts" now that it complies with the kernel format and passes the self tests. Ironically, the CryptoCell REE HW does just that, so the fix is dropping the code that forced it to use plain CBC if the ciphertext was block aligned. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
Remove legacy code no longer used by anything. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Gilad Ben-Yossef authored
We were copying our last cipher block into the request for use as the IV for all modes of operation. Fix this by discerning the behaviour based on the mode of operation used: copy the ciphertext for CBC, update the counter for CTR. CC: stable@vger.kernel.org Fixes: 63ee04c8 ("crypto: ccree - add skcipher support") Reported-by: Hadar Gat <hadar.gat@arm.com> Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
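A hedged sketch of the distinction (enum and variable names are illustrative, not the exact ccree ones): the IV handed back in the request has to depend on the mode.

    switch (ctx->cipher_mode) {
    case MODE_CBC:
            /* next IV is the last ciphertext block of this request */
            memcpy(req->iv, last_ciphertext_block, ivsize);
            break;
    case MODE_CTR:
            /* next IV is the counter advanced by the blocks just processed */
            for (i = 0; i < nblocks; i++)
                    crypto_inc(req->iv, ivsize);
            break;
    }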
-
Gilad Ben-Yossef authored
The testmgr hash tests were testing init, digest, update and final methods but not the finup method. Add a test for this one too. While doing this, make sure we only run the partial tests once with the digest tests and skip them with the final and finup tests since they are the same. Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
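Roughly what the new test case exercises, sketched against the ahash API rather than the literal testmgr code: an update() over the head of the data followed by finup() over the tail must produce the same digest as a one-shot digest().

    ahash_request_set_crypt(req, sg_head, NULL, head_len);
    err = crypto_ahash_update(req);
    if (err)
            goto out;

    ahash_request_set_crypt(req, sg_tail, result, tail_len);
    err = crypto_ahash_finup(req);   /* update + final in a single call */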
-
Hadar Gat authored
The finup() operation was incorrect: padding was missing. Fix this by configuring the ccree HW to enable padding. Signed-off-by: Hadar Gat <hadar.gat@arm.com> [ gilad@benyossef.com: refactored for better code sharing ] Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com> Cc: stable@vger.kernel.org Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Some crypto API users allocating a tfm with crypto_alloc_$FOO() are also specifying the type flags for $FOO, e.g. crypto_alloc_shash() with CRYPTO_ALG_TYPE_SHASH. But, that's redundant since the crypto API will override any specified type flag/mask with the correct ones. So, remove the unneeded flags. This patch shouldn't change any actual behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
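The kind of call site being cleaned up, as a small before/after sketch:

    /* Before: the type flag is redundant, the API overrides type and mask. */
    tfm = crypto_alloc_shash("sha256", CRYPTO_ALG_TYPE_SHASH, 0);

    /* After: equivalent, and matches what the API actually honours. */
    tfm = crypto_alloc_shash("sha256", 0, 0);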
-
Eric Biggers authored
Some skcipher algorithms set .cra_flags = CRYPTO_ALG_TYPE_SKCIPHER. But this is redundant with the C structure type ('struct skcipher_alg'), and crypto_register_skcipher() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around. So, remove the useless assignment from all the skcipher algorithms. This patch shouldn't change any actual behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
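A sketch of the resulting algorithm definition (name and fields abridged and illustrative): only behavioural flags remain, since crypto_register_skcipher() supplies the type bits itself.

    static struct skcipher_alg example_alg = {
            .base.cra_name  = "cbc(example)",
            /* no CRYPTO_ALG_TYPE_SKCIPHER here any more */
            .base.cra_flags = CRYPTO_ALG_ASYNC,
            /* ... remaining fields unchanged ... */
    };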
-
Eric Biggers authored
Some aead algorithms set .cra_flags = CRYPTO_ALG_TYPE_AEAD. But this is redundant with the C structure type ('struct aead_alg'), and crypto_register_aead() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around. So, remove the useless assignment from all the aead algorithms. This patch shouldn't change any actual behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Some ahash algorithms set .cra_type = &crypto_ahash_type. But this is redundant with the C structure type ('struct ahash_alg'), and crypto_register_ahash() already sets the .cra_type automatically. Apparently the useless assignment has just been copy+pasted around. So, remove the useless assignment from all the ahash algorithms. This patch shouldn't change any actual behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Many ahash algorithms set .cra_flags = CRYPTO_ALG_TYPE_AHASH. But this is redundant with the C structure type ('struct ahash_alg'), and crypto_register_ahash() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around. So, remove the useless assignment from all the ahash algorithms. This patch shouldn't change any actual behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Gilad Ben-Yossef <gilad@benyossef.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
Many shash algorithms set .cra_flags = CRYPTO_ALG_TYPE_SHASH. But this is redundant with the C structure type ('struct shash_alg'), and crypto_register_shash() already sets the type flag automatically, clearing any type flag that was already there. Apparently the useless assignment has just been copy+pasted around. So, remove the useless assignment from all the shash algorithms. This patch shouldn't change any actual behavior. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
With all the crypto modules enabled on x86, and with a CPU that supports AVX-2 but not SHA-NI instructions (e.g. Haswell, Broadwell, Skylake), the "multibuffer" implementations of SHA-1, SHA-256, and SHA-512 are the highest priority. However, these implementations only perform well when many hash requests are being submitted concurrently, filling all 8 AVX-2 lanes. Otherwise, they are incredibly slow, as they waste time waiting for more requests to arrive before proceeding to execute each request. For example, here are the speeds I see hashing 4096-byte buffers with a single thread on a Haswell-based processor:

                generic     avx2        mb (multibuffer)
                -------     --------    ----------------
    sha1        602 MB/s    997 MB/s    0.61 MB/s
    sha256      228 MB/s    412 MB/s    0.61 MB/s
    sha512      312 MB/s    559 MB/s    0.61 MB/s

So, the multibuffer implementation is 500 to 1000 times slower than the other implementations. Note that with smaller buffers or more update()s per digest, the difference would be even greater. I believe the vast majority of people are in the boat where the multibuffer code is much slower, and only a small minority are doing the highly parallel, hashing-intensive, latency-flexible workloads (maybe IPsec on servers?) where the multibuffer code may be beneficial. Yet, people often aren't familiar with all the crypto config options and so the multibuffer code may inadvertently be built into the kernel. Also the multibuffer code apparently hasn't been very well tested, seeing as it was sometimes computing the wrong SHA-256 digest. So, let's make the multibuffer algorithms low priority. Users who want to use them can either request them explicitly by driver name, or use NETLINK_CRYPTO (crypto_user) to increase their priority at runtime. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
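For the minority who do want the multibuffer code, requesting it explicitly by driver name still works; a minimal sketch:

    /* Ask for the multibuffer driver by name rather than plain "sha256",
     * which will now resolve to a higher-priority implementation. */
    struct crypto_ahash *tfm = crypto_alloc_ahash("sha256_mb", 0, 0);

    if (IS_ERR(tfm))
            return PTR_ERR(tfm);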
-
Eric Biggers authored
sha512-generic and sha384-generic had a cra_priority of 0, so it wasn't possible to have a lower priority SHA-512 or SHA-384 implementation, as is desired for sha512_mb which is only useful under certain workloads and is otherwise extremely slow. Change them to priority 100, which is the priority used for many of the other generic algorithms. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
sha256-generic and sha224-generic had a cra_priority of 0, so it wasn't possible to have a lower priority SHA-256 or SHA-224 implementation, as is desired for sha256_mb which is only useful under certain workloads and is otherwise extremely slow. Change them to priority 100, which is the priority used for many of the other generic algorithms. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
sha1-generic had a cra_priority of 0, so it wasn't possible to have a lower priority SHA-1 implementation, as is desired for sha1_mb which is only useful under certain workloads and is otherwise extremely slow. Change it to priority 100, which is the priority used for many of the other generic algorithms. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
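The shape of the change across these three commits, sketched for sha1-generic (fields abridged, layout illustrative):

    static struct shash_alg alg = {
            /* ... */
            .base = {
                    .cra_name     = "sha1",
                    .cra_priority = 100,   /* was implicitly 0 */
                    /* ... */
            }
    };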
-
Eric Biggers authored
"arch/x86/crypto/sha*-mb" needs a trailing slash, since it refers to directories. Otherwise get_maintainer.pl doesn't find the entry. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Eric Biggers authored
There is a copy-paste error where sha256_mb_mgr_get_comp_job_avx2() copies the SHA-256 digest state from sha256_mb_mgr::args::digest to job_sha256::result_digest. Consequently, the sha256_mb algorithm sometimes calculates the wrong digest. Fix it.

Reproducer using AF_ALG:

    #include <assert.h>
    #include <linux/if_alg.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static const __u8 expected[32] =
        "\xad\x7f\xac\xb2\x58\x6f\xc6\xe9\x66\xc0\x04\xd7\xd1\xd1\x6b\x02"
        "\x4f\x58\x05\xff\x7c\xb4\x7c\x7a\x85\xda\xbd\x8b\x48\x89\x2c\xa7";

    int main()
    {
        int fd;
        struct sockaddr_alg addr = {
            .salg_type = "hash",
            .salg_name = "sha256_mb",
        };
        __u8 data[4096] = { 0 };
        __u8 digest[32];
        int ret;
        int i;

        fd = socket(AF_ALG, SOCK_SEQPACKET, 0);
        bind(fd, (void *)&addr, sizeof(addr));
        fork();
        fd = accept(fd, 0, 0);
        do {
            ret = write(fd, data, 4096);
            assert(ret == 4096);
            ret = read(fd, digest, 32);
            assert(ret == 32);
        } while (memcmp(digest, expected, 32) == 0);

        printf("wrong digest: ");
        for (i = 0; i < 32; i++)
            printf("%02x", digest[i]);
        printf("\n");
    }

Output was:

    wrong digest: ad7facb2000000000000000000000000ffffffef7cb47c7a85dabd8b48892ca7

Fixes: 172b1d6b ("crypto: sha256-mb - fix ctx pointer and digest copy") Cc: <stable@vger.kernel.org> # v4.8+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
The main goal of this patch is to improve driver performance by moving the crypto request from a list to an RDR ring shadow. This is possible since there is one producer and one consumer for this RDR request shadow, and one ring descriptor is left unused. Doing this change eliminates the use of a spinlock when accessing the descriptor ring and the need to dynamically allocate memory per crypto request. The crypto request is placed in the first RDR shadow descriptor only if there are enough descriptors; when the result handler is invoked, it fetches the first result descriptor from the RDR shadow. Signed-off-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
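A hedged sketch of why a single-producer/single-consumer shadow ring needs no spinlock (structure and names are illustrative, not the safexcel ones, and a real implementation would also add the appropriate memory barriers): each index is written by exactly one side, and keeping one slot unused distinguishes full from empty.

    #define RING_SIZE 512   /* illustrative size */

    struct req_shadow {
            struct crypto_async_request *req[RING_SIZE];
            unsigned int head;   /* written only by the submitter */
            unsigned int tail;   /* written only by the result handler */
    };

    static bool shadow_push(struct req_shadow *s, struct crypto_async_request *r)
    {
            unsigned int next = (s->head + 1) % RING_SIZE;

            if (next == s->tail)    /* one descriptor left unused: ring full */
                    return false;
            s->req[s->head] = r;
            s->head = next;
            return true;
    }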
-
Ofer Heifetz authored
This patch adds support for two new algorithms in the Inside Secure SafeXcel cryptographic engine driver: ecb(des3_ede) and cbc(des3_ede). Signed-off-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
This patch adds support for two algorithms in the Inside Secure SafeXcel cryptographic engine driver: ecb(des) and cbc(des). Signed-off-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
This patch adds support for the hmac(md5) algorithm in the Inside Secure SafeXcel cryptographic engine driver. Signed-off-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
This patch adds the MD5 algorithm support to the Inside Secure SafeXcel cryptographic engine driver. Signed-off-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
The ORO bridge (connected to the EIP197 write channel) does not generate back pressure towards the EIP197 when its internal FIFO is full. It assumes that the EIP will not drive more write transactions than the maximum number of outstanding transactions it supports (32). Hence tx_max_cmd_queue must be configured to 5 (or less). Signed-off-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
This patch adds extra steps in the module removal path, to reset the command and result rings. The corresponding interrupts are cleared, and the ring address configuration is reset. Signed-off-by: Ofer Heifetz <oferh@marvell.com> [Antoine: small reworks, commit message] Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
This patch updates the TRC configuration so that the version of the EIP197 engine being used is taken into account, as the configuration differs between the EIP197B and the EIP197D. Signed-off-by: Ofer Heifetz <oferh@marvell.com> [Antoine: commit message] Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
This patch documents the new compatible used for the eip197d engine, as this new engine is now supported by the Inside Secure SafeXcel cryptographic driver. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
This patch adds support for the eip197d engine to the Inside Secure SafeXcel cryptographic driver. This new engine is similar to the eip197b and reuses most of its code. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
So far a single processing engine (PE) was configured and used in the Inside Secure SafeXcel cryptographic engine driver. Some versions have more than a single PE. This patch reworks the driver's initialization to take this into account and to allow configuring more than one PE. Signed-off-by: Ofer Heifetz <oferh@marvell.com> [Antoine: some reworks and commit message.] Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Ofer Heifetz authored
The Inside Secure SafeXcel driver currently uses 4 rings, but the eip197d engine has 8 of them. This patch updates the driver so that rings are allocated dynamically based on the number of available rings supported by a given engine. Signed-off-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
Add a flags field in the private structure, and a first flag for engines needing context invalidation (currently only the eip197b). The invalidation is needed when the engine includes a TRC cache, which will also be true for the upcoming addition of the eip197d engine. Suggested-by: Ofer Heifetz <oferh@marvell.com> Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
EIP engines do not all support the same set of algorithms. So far the engines supported in the Inside Secure SafeXcel driver happen to support the same set of algorithms, but that won't be true for all engines. This patch adds an 'engines' field in the algorithm definitions so that they are only registered when a compatible cryptographic engine is used. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
The compatibles were updated in the Inside Secure SafeXcel cryptographic driver, as the ones previously used were not specific enough. The old compatibles are still supported by the driver for backward compatibility. This patch updates the documentation accordingly. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
At first we used two compatibles in the SafeXcel driver, named after the engine revision: eip97 and eip197. However, this family of engines has more precise versions, and in fact we're supporting the eip97ies and eip197b. More versions will be supported in the future, such as the eip197d, and we'll need to differentiate them. This patch fixes the compatibles used in the driver to use the precise ones. The two historical compatibles are kept for backward compatibility. Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Antoine Tenart authored
This patch moves the firmware loaded by the Inside Secure SafeXcel driver from /lib/firmware/ to /lib/firmware/inside-secure/eip197b/. This prepares the driver for future patches which will support other revisions of the EIP197 crypto engine as they'll have their own firmwares. To keep the compatibility of what was done, the old path is still supported as a fallback for the EIP197b (currently the only one supported by the driver that loads a firmware). Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
Stephan Mueller authored
According to SP800-56A section 5.6.2.1, the public key to be processed for the DH operation shall be checked for appropriateness. The check shall cover the full verification test in case the domain parameter Q is provided, as defined in SP800-56A section 5.6.2.3.1. If Q is not provided, the partial check according to SP800-56A section 5.6.2.3.2 is performed. The full verification test requires the presence of the domain parameter Q. Thus, the patch adds the support to handle Q. It is permissible to not provide the Q value as part of the domain parameters. This implies that the interface is still backwards-compatible where so far only P and G are to be provided. However, if Q is provided, it is imported. Without the test, the NIST ACVP testing fails. After adding this check, the NIST ACVP testing passes. Testing without providing the Q domain parameter has been performed to verify the interface has not changed. Signed-off-by: Stephan Mueller <smueller@chronox.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
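A toy, self-contained illustration of the full check described above, using small fixed-width integers instead of the kernel's MPI library (this shows only the mathematical shape of the SP800-56A test, not the patch's code): accept y only if 1 < y < p - 1 and y^q mod p == 1; without Q, only the range check (the partial validation) can be done.

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t mod)
    {
            __uint128_t result = 1, b = base % mod;

            while (exp) {
                    if (exp & 1)
                            result = (result * b) % mod;
                    b = (b * b) % mod;
                    exp >>= 1;
            }
            return (uint64_t)result;
    }

    static bool dh_check_pubkey(uint64_t y, uint64_t p, uint64_t q)
    {
            if (y <= 1 || y >= p - 1)
                    return false;                /* partial check, 5.6.2.3.2 */
            return modpow(y, q, p) == 1;         /* full check needs Q, 5.6.2.3.1 */
    }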
-
lionel.debieve@st.com authored
Adding pm and pm_runtime support to STM32 CRC. Signed-off-by: Lionel Debieve <lionel.debieve@st.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-
lionel.debieve@st.com authored
Adding pm and pm_runtime support to STM32 HASH. Signed-off-by: Lionel Debieve <lionel.debieve@st.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
-